converting float to varchar sql

I'm writing a stored procedure that converts float columns to varchar as part of its process when returning the data.
I don't want to convert everything to varchar(max) because I think it's probably more efficient not to. What is the largest size varchar I need to use?
convert(NVARCHAR(????), floatcolumn)
100?
I want to make sure I never get a result that looks like 8397Xe10.

Presumably, you are using SQL Server (based on the code in your question). If you don't want exponential notation, use the str() function. The length of the string doesn't have a large impact on performance, but you can do something like:
select str(floatcolumn, 100, 8) -- or whatever you think reasonable bounds are for your data
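For example, a quick demonstration of the difference (the sample value is my own, not from your question):
declare @f float = 83970000000
select convert(varchar(100), @f) as with_convert -- '8.397e+010': scientific notation
select ltrim(str(@f, 25, 2)) as with_str -- '83970000000.00': plain digits; str() right-justifies, hence the ltrim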

Related

Truncation using round function isn't achieved as expected in sql server

I have a field stored in the float datatype. I need to convert it to numeric without it getting implicitly rounded in the process of conversion.
I have tried round(float_data, scale, 1). It seems to work fine for most cases, but when the number of digits after the decimal point is less than the scale mentioned in the round function, it floors the number down rather than appending 0 at the end.
For instance, round(0.0243, 5, 1) returns 0.02429. Why isn't it simply truncating the number to the number of digits mentioned?
I know this issue arises when float is the source datatype, but I cannot change the source datatype.
The same truncation happens when the same conversion is done via SSIS. Is there any way in SQL to achieve this?
Because when converted to a float, the decimal 0.0243 is stored as 0.02429999969899654388427734375, which truncates to 0.02429. It looks like you want to round instead of truncate, e.g.:
declare @f float = 0.0243
select round(@f, 5, 0)
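A small demonstration of truncating vs. rounding, as I understand SQL Server's float behaviour:
declare @f float = 0.0243
select round(@f, 5, 1) as truncated -- 0.02429: truncates the stored 0.0242999996...
select round(@f, 5, 0) as rounded -- 0.0243: rounds, matching the decimal you typed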

SQL data type - recommendation for 'unknown' number

I'm pulling some external data into my MSSQL server. Several columns of incoming data are marked as 'number' (it's a JSON file). It's millions of rows in size, and many of the columns appear to be decimal(18,2), like 23.33. But I can't be sure it will always be like that; in fact, a few have been 23.333, or longer numbers like 23.35555555, which will mess up my import.
So my question is: given a column that is going to have some kind of number imported into it, but I can't really be sure how big it is or how many decimal places it's going to have, do I have to resort to making my column a varchar, or is there a very generic number kind of column I'm not thinking of?
Is there a max-size decimal, sort of like using VARCHAR(8000) or VARCHAR(MAX)?
update
This is the 'data type' of number that I'm pulling in:
https://dev.socrata.com/docs/datatypes/number.html#
Looks like it can be pretty much any number, as per their writing:
"Numbers are arbitrary precision, arbitrary scale numbers."
The way I handle things like this is to import the raw data into a staging table in a varchar(max) column.
Then I use TRY_PARSE() or TRY_CONVERT() when moving it to the desired datatype in my final destination table.
The point here is that the shape of the incoming data shouldn't determine the datatype you use. The datatype should be determined by the usage of the data once it's in your table. And if the incoming data doesn't fit, there are ways of making it fit.
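A minimal sketch of that staging pattern; the table and column names are made up, and the decimal(38,10) target is just an assumed example (try_convert() needs SQL Server 2012+):
create table dbo.StagingRaw (RawNumber varchar(max))
-- ...bulk-load the incoming JSON values into StagingRaw...
insert into dbo.FinalTable (Amount)
select try_convert(decimal(38,10), RawNumber) -- NULL when a value doesn't parse or fit
from dbo.StagingRaw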
What do those numbers represent? If they are just values to display, you could set float as the datatype and be good to go.
But if they are coordinates or currencies or anything you need for absolutely precise calculations, float can give rounding problems. In that case, set your desired minimal precision with decimal and simply truncate whatever goes beyond it.
For instance, if most of the numbers have two decimals, you could go with 3 or 4 decimal places to be sure; anything over that will be cut.
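For illustration (sample value is mine): keeping four decimal places, a plain cast rounds the excess, while round() with a third argument of 1 truncates it:
select cast(23.35555555 as decimal(18,4)) as rounded_to_4 -- 23.3556
select round(23.35555555, 4, 1) as truncated_to_4 -- 23.35550000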

data conversion issue

I have a table in which one of the fields is the real data type. I need to show the values in a decimal format like #.###, so I'm converting the real values to decimal. But for some values the conversion does not produce the actual value. For example, 20.05 is the actual value; multiply it by 100 and then convert it to decimal(9,4) and it returns 2004.9999.
select cast(cast(20.05 as real)*100.00 as decimal(9,4))
Why is it returning this?
Real and float are not precise...
Even if you see the value as "20.05", and even if you type it in like that, there will be tiny differences.
Your value 2004.9999 (or something similar like 2005.00001) is due to the internal representation of this type.
If you do the conversion to decimal first, it should work as expected:
select cast(cast(20.05 as real) as decimal(9,4))*100.00
But you should really think about where and why you use floating-point numbers...
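Side by side (the 2004.9999 result comes from your own query; the exact values can vary with the float representation):
select cast(cast(20.05 as real)*100.00 as decimal(9,4)) as multiply_first -- 2004.9999
select cast(cast(20.05 as real) as decimal(9,4))*100.00 as convert_first -- 2005.000000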
UPDATE: FORMAT() function
With SQL Server 2012+ you can use the FORMAT() function:
SELECT FORMAT(CAST(20.05 AS REAL)*100, '0.000')
This allows you to specify the format, and you will get text back.
This is fine for presentation output (lists, reports), but not so fine if you want to continue with some kind of calculation.

Detect / Determine data stored in a varbinary field

I have several tables with a varbinary column in a database. They have names like CSB_BLOB or OBJECT_BLOB. I am having only intermittent success with getting the data out.
For example, this query returns readable text from this data. GREAT!
0x46726F6D3A20226465616E6E6167726.....etc --data as stored in the column
SELECT CAST(CSB_BLOB AS VARCHAR(MAX)) AS 'Message' FROM OBJECT_BLOB
However, this column gives the following query results.
0x0001000000FFFFFFFF01000000000000000C....etc. --data as stored in column
--this query returns empty result
SELECT CAST(CSB_BLOB AS VARCHAR(MAX)) AS 'Message' FROM CSB_STATUS_LOG
--this query returns no change???
SELECT CONVERT(VARCHAR(MAX), CONVERT(VARBINARY(MAX), CSB_BLOB, 2), 2) FROM CSB_STATUS_LOG
0001000000FFFFFFFF01000000000000000C....etc
Obviously there is a difference between the two, but I am not educated enough to interpret it. What do I need to learn / read so I can look at the data in one of these BLOB columns and know how to convert it to something meaningful?
If you can't really know just from looking, what are all the different conversions I need to try? That in and of itself seems like an impossible question, since just about anything can be converted to binary and stored, so...
As far as the different conversions go, I am not asking you to write my TSQL for me, but just to tell me the common conversions.
Something like:
Try to cast as varchar to see if it is text.
Turn it into a byte array and see if it is a jpg.
Turn it into a byte array and see if it is a pdf.
Convert it to hex and then cast it as varchar.
etc.... (for instance, something like the signature check sketched below)
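Here is a minimal sketch of the kind of check I mean, using well-known file signatures ("magic numbers"); the JPEG, PDF, and gzip byte patterns are assumptions taken from published signature lists, not from my data:
SELECT CSB_BLOB,
       CASE
           WHEN SUBSTRING(CSB_BLOB, 1, 3) = 0xFFD8FF   THEN 'probably JPEG'
           WHEN SUBSTRING(CSB_BLOB, 1, 4) = 0x25504446 THEN 'probably PDF (%PDF)'
           WHEN SUBSTRING(CSB_BLOB, 1, 2) = 0x1F8B     THEN 'probably gzip'
           ELSE 'unknown -- try CAST(CSB_BLOB AS VARCHAR(MAX)) for text'
       END AS GuessedType
FROM CSB_STATUS_LOG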
Thank You

convert 'null' varchar to decimal

I have a requirement to create some xml structs (to borrow a C phrase) in SQL Server 2005. In order to do this, I change all my values to varchar. The problem arises when I want to make USE of these values: I have to convert them back to decimal.
So, my xml code looks like this:
set @result = @result + '<VAL>' + coalesce(cast(@val as varchar(20)), '-.11111') + '</VAL>'
This way, if VAL is null, I return a special decimal and I can check for that decimal. The drawback of doing this is that I can't use coalesce on the other end when I use the value; I have to check whether the converted value is equal to my special value.
like this:
case when cast(InvestmentReturn.fn_getSTRUCT(...args...).value('results[1]/VAL[1]', 'varchar(40)') as decimal(10,5)) = -.11111
Since performance is unacceptable right now, I thought one way to improve it might be to use coalesce instead of a nested case statement that checks the value for equality with my special 'null' equivalent.
Any thoughts?
Also, I see that select cast('null' as decimal(10,5)) gives me:
Msg 8114, Level 16, State 5, Line 1
Error converting data type varchar to numeric.
Performance issues can be caused by a number of factors.
The first one is using XML in SQL 2005. I don't know the size of the XML data you are using, but when I tried this 5 years ago, crossing a certain size barrier (I think it was 32k, might have been 64k) made processing performance fall off a cliff. One extra byte would cause a query to go from 500ms to 60 seconds. We had to abandon letting SQL Server deal with XML data itself at that point. It was MUCH faster to do that processing in C#.
The second one is making calls to functions inside a select statement. If that function has to operate on multiple rows, performance goes down. One example I always use to illustrate this is GETDATE(). If you set a variable to the return of GETDATE() and then use that variable in a select query, it will run an order of magnitude faster than calling GETDATE() in the query itself. The little code example you provided could be a killer just because it calls a function.
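A sketch of that pattern (the table and column names are hypothetical, for illustration only):
declare @now datetime
set @now = getdate() -- capture the function result once
select OrderId
from Orders -- hypothetical table
where CreatedAt < @now -- instead of: CreatedAt < getdate()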
This may not be a good answer to your immediate problem, but I really believe you would be much better served yanking any XML processing code out of SQL Server and doing it in ANY OTHER language of your choice.