I have a table in which one of the fields is of the REAL data type. I need to show the values in a decimal format like #.###, so I'm converting the real values to decimal. But for some values the conversion does not produce the actual value. For example, 20.05 is the actual value; multiply it by 100 and then cast it to decimal(9,4) and it returns something like 2004.9999.
select cast(cast(20.05 as real)*100.00 as decimal(9,4))
Why is it returning a value like this?
REAL and FLOAT are not precise...
Even if you see the value as "20.05", and even if you typed it in exactly like that, there will be tiny differences.
Your value of 2004.9999 (or something similar, like 2005.00001) is due to the internal representation of this type.
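A quick way to see the stored value behind "20.05" is to expose more digits with STR() (a minimal sketch):
-- The REAL closest to 20.05 is slightly below it, which is why the later math drifts.
SELECT STR(CAST(20.05 AS REAL), 25, 10) AS stored_value;  -- 20.0499992371 (approximately)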
If you do the conversion to decimal first, it should work as expected:
select cast(cast(20.05 as real) as decimal(9,4))*100.00
But you should really think about where and why you use floating-point numbers...
UPDATE: the FORMAT() function
With SQL Server 2012+ you can use the FORMAT() function:
SELECT FORMAT(CAST(20.05 AS REAL)*100,'#.000')
This allows you to specify the format, and you will get text back.
This is fine for presentation output (lists, reports), but not so fine if you want to continue with any kind of calculation.
Related
I'm pulling some external data into my MSSQL server. Several columns of the incoming data are marked as 'number' (it's a JSON file). It's millions of rows in size, and many of the columns appear to be decimal(18,2), like 23.33. But I can't be sure it will always be like that; in fact, a few have been 23.333, or longer numbers like 23.35555555, which will mess up my import.
So my question is: given that a column is going to have some kind of number imported into it, but I can't really be sure how big it will be or how many decimal places it will have, do I have to resort to making my column a varchar, or is there a very generic number column type I'm not thinking of?
Is there a max-size decimal, sort of like using VARCHAR(8000) or VARCHAR(MAX)?
update
This is the 'data type' of number that I'm pulling in:
https://dev.socrata.com/docs/datatypes/number.html#
Looks like it can be pretty much any number, as per their writing:
"Numbers are arbitrary precision, arbitrary scale numbers."
The way I handle things like this is to import the raw data into a staging table in a varchar(max) column.
Then I use TRY_PARSE() or TRY_CONVERT() when moving it to the desired datatype in my final destination table.
The point here is that the shape of the incoming data shouldn't determine the datatype you use. The datatype should be determined by the usage of the data once it's in your table. And if the incoming data doesn't fit, there are ways of making it fit.
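A minimal sketch of that staging pattern (the table and column names are placeholders, and the DECIMAL(18,6) target is only an assumption about what the final table needs):
-- Land the raw JSON values as text first.
CREATE TABLE dbo.ImportStaging (RawAmount VARCHAR(MAX));
-- Move them to the final table; TRY_CONVERT returns NULL for anything
-- that does not fit the target type instead of failing the whole load.
INSERT INTO dbo.FinalTable (Amount)
SELECT TRY_CONVERT(DECIMAL(18, 6), RawAmount)
FROM dbo.ImportStaging;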
What do those numbers represent? If they are just values to display, you could simply set float as the datatype and you're good to go.
But if they are coordinates or currencies or anything you need for absolutely precise calculations, float might sometimes give rounding problems. In that case you should set your desired minimum scale with decimal and simply truncate whatever goes beyond it.
For instance, if most of the numbers have two decimal places, you could go with 3 or 4 to be safe, but anything over that will be cut off, as in the sketch below.
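A minimal illustration of that trade-off, assuming a scale of four decimal places (SQL Server rounds to the target scale on conversion):
-- Values with more decimal places lose the extra digits when converted to DECIMAL(18, 4).
SELECT CONVERT(DECIMAL(18, 4), 23.33)       AS short_value,  -- 23.3300
       CONVERT(DECIMAL(18, 4), 23.35555555) AS long_value;   -- 23.3556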
I recently came across a weird case in an ETL process where the results seem unpredictable to me. I read Difference between numeric, float and decimal in SQL Server, but I don't think it's an overflow or decimal precision issue.
Scenario:
Source table "test" in SQL Server 2008 SP3, column a declared as numeric (38,6).
The result is cast first to real, and then to int. The issue doesn't occur if there is a direct cast from numeric to int.
Results of:
SELECT a,CAST(a as real) as real_a,CAST(CAST(a as real) as int) as int_a FROM test;
a: 778881838.810000
real_a: 7.78819E+08
int_a: 778881856
The same experiment, run in SQL Server 2017 (sql fiddle) gives this:
http://sqlfiddle.com/#!18/45aca/2
a: 778881838.81
real_a: 778881860
int_a: 778881856
I can (vaguely) understand the ..19E+08 case, but why is there a +18 difference in the double conversion case? The number seems completely arbitrary to me.
OK, first of all, the result in SQL Server 2017 for real_a is not 778881860. It is 778881856, exactly, just as in SQL Server 2008. How this floating-point value is presented by the client is another matter -- Management Studio shows me 7.788819E+08, sqlcmd produces 7.7888186E+8, and apparently SQL Fiddle uses another library altogether (one I would personally have issue with, seeing as how it obscures significant figures!)
This value is not arbitrary. REAL is a single-precision floating point type that cannot represent 778881838.81 exactly. The closest representable value is 778881856, hence your result (the next lower representable value is 778881792). Without casting to INT, you can see this value using
SELECT STR(CONVERT(REAL, CONVERT(NUMERIC(38, 6), 778881838.810000)), 40, 16)
778881856.0000000000000000
Your use of the term "double" makes me think you're confusing this with FLOAT, which is the double-precision floating point type. FLOAT cannot represent this value exactly either, but it comes much closer:
SELECT STR(CONVERT(FLOAT, CONVERT(NUMERIC(38, 6), 778881838.810000)), 40, 16)
778881838.8099999400000000
Converting this value to an INT yields the (truncated) 778881838. (This truncation is documented and does not happen for conversions to NUMERIC; you'll need to ROUND first before converting if you'd prefer 778881839 instead.)
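A small sketch of the ROUND-then-convert approach mentioned above:
-- Rounding before the conversion avoids the silent truncation of the INT cast.
SELECT CONVERT(INT, ROUND(CONVERT(FLOAT, CONVERT(NUMERIC(38, 6), 778881838.810000)), 0));
-- returns 778881839 instead of the truncated 778881838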
Easy example for other people that want to test locally:
DECLARE @test numeric(38,6) = '778881838.810000'
SELECT @test as [Original], CAST(@test as real) as real_a, CAST(CAST(@test as real) as int) as int_a;
Original real_a int_a
778881838.810000 7.788819E+08 778881856
You would likely need someone from Microsoft to explain the way it works inside the SQL engine (and certainly to know why they made that decision), but I'll take a stab at the reasoning:
If the output of the first cast is in scientific notation and then needs to be cast to an int, it sets the int to the minimum value that would produce that scientific notation. It ends in 6 instead of 5 because rounding on 5 does not consistently round up in all cases (alternating tie-breaking, for example).
But, no matter the reason, if precision is important, you should explicitly cast to a numeric data type with a defined precision.
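A quick sketch of that advice against the same test value; going through an exact type keeps the integer part intact (the DECIMAL(19, 2) here is only an assumed target precision):
-- Casting through DECIMAL instead of REAL never drifts to 778881856.
DECLARE @test numeric(38,6) = '778881838.810000';
SELECT CAST(CAST(@test AS DECIMAL(19, 2)) AS INT) AS int_a;  -- 778881838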
When you want to convert from float or real to character data, using the STR string function is usually more useful than CAST( ). This is because STR enables more control over formatting. For more information, see STR (Transact-SQL) and Functions (Transact-SQL).
Please find the links below:
Use STR instead of REAL
STR example
Use the query below:
SELECT a, STR(a, 38, 6) as real_a, CAST(CAST(a as real) as int) as int_a FROM test;
Please let me know if you find any issues.
I'm writing a stored procedure that will convert float columns to varchar as part of its process when returning the data.
I don't want to convert everything to varchar(max) because I think it's probably more efficient not to. What is the largest size varchar I need to use here?
convert(NVARCHAR(????), floatcolumn)
100?
I want to make sure I never get a result that looks like 8397Xe10.
Presumably, you are using SQL Server (based on the code in your question). If you don't want exponential notation, then use the str() function (documented here). The length of the string doesn't have a large impact on performance, but you can do something like:
select str(floatcolumn, 100, 8) -- or whatever you think reasonable bounds are for your data
I am trying to replicate tables from a remote SQL 2000 database into my local SQL 2012 instance.
As a quick way of checking for values which have changed, I am using the "UNION ALL...GROUP BY" technique found on Simple Talk (scroll about half-way down).
Unfortunately, the remote data types are set as REAL, and since this is an approximate data type it is not very reliable: it finds differences where I don't want it to (even though those differences do exist computationally).
I have tried using CONVERT to change the values to a NUMERIC (exact) data type. However, different columns have different numbers of decimal places and finding a one size fits all solution is proving difficult.
One thing I noticed is that if I run the following query (TimeID is an INT and Value1 is a REAL):
SELECT [TimeID], [Value1], CONVERT(DECIMAL(19,10), [Value1]) AS [CONV19,10], CONVERT(DECIMAL(19,3), [Value1]) AS [CONV19,3], CONVERT(DECIMAL(19,4), [Value1]) AS [CONV19,4]
FROM [DATABASE].[SCHEMA].[TABLE]
WHERE [TimeID] = 12345
I get the following results:
[TimeID] [Value1] [CONV19,10] [CONV19,3] [CONV19,4]
12345 1126.089 1126.0885009766 1126.089 1126.0885
Note that SQL Server Management Studio displays Value1 to 3 decimal places when in its native format (i.e. without me converting it).
So my question is: how does SSMS know that it should be displayed to 3 decimal places? How does it know that 1126.0885 is not the actual number stored, but instead is 1126.089?
Ideally I'd like to understand its algorithm so I can replicate it to convert my data to the correct number of decimal places.
This won't answer your question, but it will give you a starting point to answer it yourself.
First read this:
http://msdn.microsoft.com/en-us/library/ms187912.aspx
Notably, "The behavior of float and real follows the IEEE 754 specification on approximate numeric data types."
Now read this:
http://en.wikipedia.org/wiki/IEEE_floating_point
So now you should know how float/real numbers are stored and why they are "approximate" numbers.
As for how SSMS "knows" how many decimals to display for a real/float, I don't really know, but it is probably something to do with the IEEE 754 specification.
A simple script to demonstrate this is:
DECLARE @MyNumber FLOAT(24) = 1.2345;
SELECT @MyNumber, CONVERT(NUMERIC(19, 4), @MyNumber), CONVERT(NUMERIC(19, 10), @MyNumber), CONVERT(NUMERIC(19, 14), @MyNumber);
I don't know if this is the case, but I suspect that SSMS is using .NET numeric string formatting.
I was having a similar situation, I simply wanted to SELECT into a VARCHAR the exact same thing that SSMS was displaying in the query results grid.
In the end I got what I wanted with the FORMAT function, using the General format specifier.
For example:
DECLARE @Floats TABLE([FloatColumn] FLOAT);
INSERT INTO @Floats
VALUES
(123.4567),
(1.23E-7),
(PI());
SELECT
Number = f.FloatColumn,
Text = FORMAT(f.FloatColumn, 'G'),
IncorrectText = CONVERT(NVARCHAR(50), f.FloatColumn)
FROM @Floats f;
I have to give the disclaimer that I don't know if this will work as desired in all cases, but it worked for everything I needed it to.
I'm sure this is very useful after six years.
I am working on a legacy ASP application. I am attempting to insert a value (40.33) into a field in SQL Server 2000 that happens to be a float type. Everywhere I can check in the application (via some logging), it is sending 40.33 to the stored procedure. When I run SQL Profiler against the database while the call is happening, the value I see in the trace is 4.033000183105469e+001.
Where is all the extra garbage coming from (the 183105469)?
Why is it that when I pass in 40, or 40.25 there is nothing extra?
Is this just one of the weird side effects of using float? When I am writing something I normally use money or decimal or something else, so I'm not that familiar with the float datatype.
Yes, this is a weird, although well-known, side effect of using FLOAT.
In Microsoft SQL Server, you should use exact numeric datatypes such as NUMERIC, DECIMAL, MONEY or SMALLMONEY if you need exact numerics with scale.
Do not use FLOAT.
I think this is probably just a precision issue: the 0.33 part of the number can't be represented exactly in binary, and this is probably the closest you can get to it.
The problem is that floats are not 100% accurate. If you need your numbers to be exact (especially when dealing with monetary values)... you should use a Decimal type.
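For anyone who wants to see this locally, here is a minimal sketch (the variable names are just placeholders); the REAL side holds the nearest representable binary value, which is where the extra digits in the trace come from:
-- An approximate type stores the nearest binary value to 40.33; DECIMAL stores it exactly.
DECLARE @r REAL = 40.33, @d DECIMAL(10, 2) = 40.33;
SELECT STR(@r, 25, 15) AS real_value,    -- 40.330001831054688, the "garbage" seen in the trace
       @d              AS decimal_value; -- 40.33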