How can I convert a hex number to integer in SQL, when the resulting value is larger than bigint? - sql

I have a hex number in string form, for example, 0x47423f34b640c3eb6e2a18a559322d68. When I try to convert this to an int, for comparison with another value that is an int, I run up against the BIGINT size limit. I've tried various conversions, such as to decimal, but I don't seem to be able to get past the 64-bit size limit.
Ideally, I'd like a conversion that converts from a hex string, to an int string, so I can bypass the int limits. However, doing this without using intermediate conversions (to values that are out of range) is causing me some problems.
An alternative solution would be to be able to convert from a decimal string (the other comparison value, which in the example given, is 94719161261466374640962917648041127272), to hex, or to binary, for comparison. I already have a routine that can convert arbitrary length hex strings to binary without overflowing intermediate variables, but I haven't had much luck doing decimal strings to binary without using intermediate variables, so I can't currently compare them that way either.
I already have a c# based solution for this conversion, so I could use SQL CLR or other similar solutions, but I'd much prefer a native SQL method for doing this conversion (decimal string to hex string or binary string, or hex string to decimal string or binary string).

The SQL Server 2008 release updated the CONVERT() function to be able to convert hexadecimal string values:
select convert(bigint, convert (varbinary(8), '0x0000010d1858798c', 1))
Result:
1155754654092 (decimal) ( == 0x0000010d1858798c )
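That example still tops out at bigint, but the same style-1 conversion can parse a hex string of any width into a varbinary. For values wider than 64 bits, a sketch of one workaround is to convert both sides to a fixed-width varbinary and compare the bytes directly, rather than going through an integer type at all (assuming both values fit in 16 bytes):

```sql
-- Parse the 16-byte hex string; style 1 understands the 0x prefix
select convert(varbinary(16), '0x47423f34b640c3eb6e2a18a559322d68', 1)

-- Comparison then happens on the binary values, e.g.
-- where convert(varbinary(16), @hex_string, 1) = @other_binary_value
```

This sidesteps the overflow entirely, but it does require the decimal-string side to be converted to binary as well, e.g. with a routine like the one the question already describes.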

Related

Truncation using round function isn't achieved as expected in sql server

I have a field stored in float datatype. I need to convert it to numeric without it getting implicitly rounded in the process of conversion.
I have tried round(float_data, scale, 1). It seems to work fine for most cases, but when the number of digits after the decimal point is less than the scale given to ROUND, it floors the number down rather than padding zeros at the end.
For instance, round(0.0243, 5, 1) returns 0.02429. Why isn't it simply truncating the number to the number of digits mentioned?
I know this issue arises because the source datatype is float, but I cannot change the source datatype.
The same truncation happens when this is done via SSIS. Is there any way in SQL to achieve this?
Because when converted to a float, the decimal 0.0243 is stored as 0.02429999969899654388427734375, which truncates to 0.02429. It looks like you want to round instead of truncate, e.g.
declare @f float = 0.0243
select round(@f, 5, 0)
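If truncation semantics really are required elsewhere, one alternative sketch is to cast the float to a decimal at the target scale first; the float-to-decimal conversion rounds at that scale, so the binary representation noise disappears before any truncation step:

```sql
declare @f float = 0.0243
-- cast rounds 0.02429999969... at scale 5, giving 0.02430
select cast(@f as decimal(18, 5))
```

This avoids the floor-down behaviour the question describes, at the cost of a rounding (not truncating) conversion.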

How to cast char without decimal point to decimal in Teradata?

I've got a field where data is stored as char(9). The content is numeric and actually a decimal value. Unfornutately the stored value itself doesn't contain a decimal point, only numbers.
I want to store this value inside a decimal(9,2) field. I know that I could use string functions to add a decimal point at the right position, but I'm wondering if there is a nicer way to do this cast. Maybe with TO_NUMBER and the right format string?
Example:
CHAR(9): '000123456' -> DECIMAL(9,2): 1234.56
If you want to avoid string functions, it might be easier to cast to decimal then divide by 100
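A minimal sketch of that cast-then-divide approach, assuming the char column always holds nine digits and Teradata syntax (widening first so the division cannot overflow the target precision):

```sql
-- '000123456' -> 123456 -> 1234.56
SELECT CAST(CAST('000123456' AS DECIMAL(11,0)) / 100.00 AS DECIMAL(9,2));
```

The inner cast does the numeric parsing, and the division by 100 shifts the implied decimal point two places.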

SQL Server: Convert FLOAT to NVARCHAR, maximum accuracy, without scientific notation

I need to convert a FLOAT to a string, capturing all possible precision, without showing scientific notation.
For example, when I execute SELECT 1E0 / 1346E0 I get the result 0.000742942050520059.
This is how SQL Server displays a FLOAT value by default.
In this case, it displays 18 decimal places, which is more than the STR function can provide.
It also does not add any trailing zeros.
If SQL Server Management Studio can do this, can I also get this conversion in my code?
I need to avoid scientific notation at all costs, even if there are 20 leading zeros after the decimal point. A long string is not a problem.
Unfortunately, the CONVERT function does not provide what I need, even with style 3.
try format()
SELECT
1E0 / 1346E0
, format(1E0 / 1346E0,'N18')
declare @float float = 0.000742942050520059
select cast(cast(@float as decimal(38,35)) as varchar(200))
As was also noted, FORMAT works too, although I'm not a huge fan of it, as it's fairly heavyweight (it goes through the CLR). But for one-offs, it's fine.

SQL Cast quirkiness

I have always used CONVERT (and not CAST), as I assumed the former would recognize types and do an appropriate conversion, whereas the latter simply tries to interpret a stream of bytes differently. But I just learned that CAST = CONVERT for most purposes!
But can someone explain why the following happens. CAST produces different results for the same value (101), but represented differently - decimal (101) and hexadecimal (0x65) representations.
select cast(0x65 as varchar(5))
-----
e
select cast(101 as varchar(5))
-----
101
EDIT:
The query was run from SSMS.
I assume you are using SQL Server (where the confusion between the two functions would make sense).
That is simple. 0x defines a binary constant. 101 is a numeric constant. These are not the same thing.
When you convert a binary constant to a string, it attempts to interpret the constant as a character. When you convert a number to a string, SQL Server converts the decimal representation.
You can learn more about constants in the documentation.
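To see the two interpretations side by side, a small sketch: routing the binary constant through int first recovers the number it encodes, and the reverse cast shows how 101 becomes the character 'e':

```sql
-- Interpret the bytes as a number: convert to int before varchar
select cast(cast(0x65 as int) as varchar(5))       -- 101

-- The other direction: 101 as a raw byte, then as characters
select cast(cast(101 as binary(1)) as varchar(5))  -- e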
You are trying to convert to completely different values. As Gordon mentioned, one is binary representation while the other is numeric.
But you should note that there are some differences between CAST and CONVERT:
CAST is part of the ANSI SQL specification, whereas CONVERT is not; CONVERT is specific to Microsoft SQL Server.
CONVERT also accepts an optional style parameter which is used for formatting.
Read more here: https://www.essentialsql.com/what-is-the-difference-between-cast-and-convert/
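A quick illustration of that style parameter, which CAST has no equivalent for (using date formatting as the example):

```sql
-- style 103 = dd/mm/yyyy; CAST offers no way to request this
select convert(varchar(10), cast('2020-01-02' as date), 103)  -- 02/01/2020
```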

Convert char for bit data to integer in DB2

I'm writing a DB2 user-defined function for which I need an array of non-negative integers, which I represent as a varchar for bit data. I plan to use two bytes for each integer (giving me a maximum value of 2^16-1, which is acceptable).
I can convert an integer to a char for bit data by using the chr function, but how do I get it back to an integer?
Any additional advice on bit manipulation in DB2 procedures would be helpful as well, as I can't seem to find much documentation on it. I'm using v9.1 on Linux.
I'm not sure if CHR is actually what you want. According to the documentation, the CHR function:
Returns the character that has the ASCII code value specified by the argument. The argument can be either INTEGER or SMALLINT. The value of the argument should be between 0 and 255; otherwise, the return value is null.
The opposite of the CHR function is the ASCII function.
A full list of DB2 scalar functions is here.
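Assuming CHR and ASCII do round-trip single bytes as the documentation describes, a hypothetical sketch of packing a 16-bit value into two bytes and recovering it (the SUBSTR positions and the 256 factor are my assumptions, not from the question):

```sql
-- encode n (0..65535) as two bytes: CHR(n / 256) || CHR(MOD(n, 256))
-- decode a two-byte slice b back to an integer:
--   ASCII(SUBSTR(b, 1, 1)) * 256 + ASCII(SUBSTR(b, 2, 1))
-- e.g. round-tripping 300 (= 1*256 + 44):
SELECT ASCII(SUBSTR(CHR(1) || CHR(44), 1, 1)) * 256
     + ASCII(SUBSTR(CHR(1) || CHR(44), 2, 1))
FROM SYSIBM.SYSDUMMY1
```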
I'm not sure if writing a UDF in this way is the best approach for what you're trying to do. You may want to consider writing a stored procedure in a language other than SQL; there's a list of supported languages, such as Java, C, and C++.