Arithmetic operation with numeric datatype in SQL Server yields different results

I get different results when using real and numeric data type.
When I use real as the datatype I get finalValue as -139.2466; when I use the numeric datatype I get finalValue as -139.246409. Which value is correct?
When I plug these numbers into Excel, the result matches -139.2466.
For example:
create table #resr ( a1 real, a2 real, a3 real)
insert #resr select 0.471163361822717, 0.0096160000 , 0.001669000000000
select a1*a2*-51.295/a3 finalValue from #resr
create table #resn ( a1 numeric(30,15), a2 numeric(30,15), a3 numeric(30,15))
insert #resn select 0.471163361822717, 0.0096160000 , 0.001669000000000
select a1*a2*-51.295/a3 finalValue from #resn

Floating-point data types (of which REAL is one) hold approximate values: they encode numbers in binary with a limited number of significant digits, which causes minute differences between the decimal value you write and the value SQL Server actually stores and interprets. This is also why a single float(10) can hold both 1234567890 and .1234567890:
select cast(1234567890 as float(10))
select cast(.1234567890 as float(10))
Exact types (such as decimal and numeric) define exactly how many decimal places are allowed, and fill in zeroes for any places out to as many as have been defined.
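For instance (a minimal sketch in the same style as the casts above):
select cast(1.5 as numeric(10,4)) -- returns 1.5000, padded out to the declared scale
select cast(1.5 as float)         -- returns 1.5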
Floats give you the ability to model a wider range of numbers, since extremely large and extremely small values fit by letting the decimal point "float" rather than sit at a fixed position in memory. They're also fine in most cases, as the decimal precision you lose usually isn't a big deal, and they tend to be smaller than exact data types (though not always). However, if you know the size of the values you're expecting ahead of time, it's usually best to use a decimal.
Which value is "correct"? The numeric value. If you're ever comparing a floating point representation of a number vs an exact representation, go with the exact representation.

Related

Shouldn't binary_double store a higher value than number in Oracle?

Considering the following test code :
CREATE TABLE binary_test (bin_float BINARY_FLOAT, bin_double BINARY_DOUBLE, NUM NUMBER);
INSERT INTO binary_test VALUES (4356267548.32345E+100, 4356267548.32345E+2+300, 4356267548.32345E+100);
SELECT CASE WHEN bin_double>to_binary_double(num) THEN 'Greater'
WHEN bin_double=to_binary_double(num) THEN 'Equal'
WHEN bin_double<to_binary_double(num) THEN 'Lower'
ELSE 'Unknown' END comparison,
A.*
FROM binary_test A;
I've tried to see which one stores higher values. If I try to use E+300 for the number and binary_float columns, it returns a numeric overflow error. So I thought I could store a greater value with the binary_double.
However, when I tried to check it, it shows a lower value, and with the case comparison it says it is lower too. Could you please elaborate this situation?
You are inserting the value 4356267548.32345E+2+300 into the binary double column. That evaluates to 4356267548.32345E+2, which is 435626754832.345, plus 300 - which is 435626755132.345 (or 4.35626755132345E+011, which becomes 4.3562675513234497E+011 when converted to binary double). That is clearly lower than 4356267548.32345E+100 (or 4.35626754832345E+109, which becomes 4.3562675483234496E+109 when converted to binary double).
Not directly relevant, but you should also be aware that you're providing a decimal number literal, which will be implicitly converted to binary double during insert. So you can't use 4356267548.32345E+300, as that is too large for the number data type. If you want to specify a binary double literal then you need to append a d to it, i.e. 4356267548.32345E+300d; but that is still too large.
The highest you can go with that numeric part is 4356267548.32345E+298d, which evaluates to 4.3562675483234498E+307 - just below the data type limit of 1.79769313486231E+308; and note the loss of precision.
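As a hedged illustration of that last point (runnable in Oracle):
select 4356267548.32345E+298d as near_limit from dual
-- about 4.3562675483234498E+307, just under the BINARY_DOUBLE limit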

How Can I Get An Exact Character Representation of a Float in SQL Server?

We are doing some validation of data which has been migrated from one SQL Server to another SQL Server. One of the things that we are validating is that some numeric data has been transferred properly. The numeric data is stored as a float datatype in the new system.
We are aware that there are a number of issues with float datatypes, that exact numeric accuracy is not guaranteed, and that one cannot use exact equality comparisons with float data. We don't have control over the database schemas nor data typing and those are separate issues.
What we are trying to do in this specific case is verify that some ratio values were transferred properly. One of the specific data validation rules is that all ratios should be transferred with no more than 4 digits to the right of the decimal point.
So, for example, valid ratios would look like:
.7542
1.5423
Invalid ratios would be:
.12399794301
12.1209377
What we would like to do is count the number of digits to the right of the decimal point and find all cases where the float values have more than four digits to the right of it. We've been using the SUBSTRING, LEN, STR, and a couple of other functions to achieve this, and I am sure it would work if we had numeric fields typed as decimal which we were casting to char.
However, what we have found when attempting to convert a float to a char value is that SQL Server seems to always convert to decimal in between. For example, the field in question shows this value when queried in SQL Server Enterprise Manager:
1.4667
Attempting to convert to a string using the recommended function for SQL Server:
LTRIM(RTRIM(STR(field_name, 22, 17)))
Returns this value:
1.4666999999999999
The value which I would expect if SQL Server were directly converting from float to char (which we could then trim trailing zeroes from):
1.4667000000000000
Is there any way in SQL Server to convert directly from a float to a char without going through what appears to be an intermediate conversion to decimal along the way? We also tried the CAST and CONVERT functions and received similar results to the STR function.
SQL Server Version involved: SQL Server 2012 SP2
Thank you.
Your validation rule seems to be misguided.
An SQL Server FLOAT, or FLOAT(53), is stored internally as a 64-bit floating-point number according to the IEEE 754 standard, with 53 bits of mantissa ("value") plus an exponent. Those 53 binary digits correspond to approximately 15 decimal digits.
Floating-point numbers have limited precision, which does not mean that they are "fuzzy" or inexact in themselves, but that not all numbers can be exactly represented, and instead have to be represented using another number.
For example, there is no exact representation for your 1.4667, and it will instead be stored as a binary floating-point number that (exactly) corresponds to the decimal number 1.466699999999999892708046900224871933460235595703125. Correctly rounded to 16 decimal places, that is 1.4666999999999999, which is precisely what you got.
Since the "exact character representation of the float value that is in SQL Server" is 1.466699999999999892708046900224871933460235595703125, the validation rule of "no more than 4 digits to the right of the decimal point" is clearly flawed, at least if you apply it to the "exact character representation".
What you might be able to do, however, is to round the stored number to fewer decimal places, so that the small error at the end of the decimals is hidden. Converting to a character representation rounded to 15 instead of 16 places (remember those "15 decimal digits" mentioned at the beginning?) will give you 1.466700000000000, and then you can check that all decimals after the first four are zeroes.
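A minimal sketch of that check, assuming a hypothetical table migrated_data with a float column field_name (both names are illustrative, not from the original system):
select field_name
from migrated_data -- hypothetical table name
where str(field_name, 22, 15) not like '%.' + replicate('[0-9]', 4) + replicate('0', 11)
-- flags rows where decimals 5 through 15 are not all zero, i.e. more than 4 real decimal places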
You can try casting to varchar:
select case when
       len(
           substring(cast(col as varchar(100)),
                     charindex('.', cast(col as varchar(100))) + 1,
                     len(cast(col as varchar(100))))
       ) <= 4 -- the rule is "no more than 4 digits", so <= rather than =
       then 'true' else 'false' end
from tablename
where charindex('.', cast(col as varchar(100))) > 0
For this particular number, don't use STR(), and use a convert or cast to varchar. But, in general, you will always have precision issues when storing in float... it's the nature of the storage of that datatype. The best you can do is normalize to a NUMERIC type and compare with threshold ranges (+/- .0001, for example). See the following for a breakdown of how the different conversions work:
declare @float float = 1.4667
select @float,
       convert(numeric(18,4), @float),
       convert(nvarchar(20), @float),
       convert(nvarchar(20), convert(numeric(18,4), @float)),
       str(@float, 22, 17),
       str(convert(numeric(18,4), @float)),
       convert(nvarchar(20), convert(numeric(18,4), @float))
Instead of casting to a VarChar you might try this: cast to a decimal with 4 fractional digits and check if it's the same value as before.
case when field_name <> convert(numeric(38,4), field_name)
then 1
else 0
end
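For instance, to list only the offending rows (table name hypothetical):
select field_name
from migrated_data -- hypothetical table name
where field_name <> convert(numeric(38,4), field_name) -- true when digits beyond the 4th decimal survive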
The issue you have here is that float is an approximate numeric data type: the default float(53) is accurate to roughly 15 decimal digits, and real (float(24)) to about 7. It approximates the value while using less storage than a decimal / numeric. That's why you don't use float for values that require exact precision.
Check this example:
DECLARE @t TABLE (
    col FLOAT
)
INSERT INTO @t (col)
VALUES (1.4666999999999999)
     , (1.4667)
     , (1.12399794301)
     , (12.1209377);
SELECT col
     , CONVERT(NVARCHAR(MAX), col) AS chr
     , CAST(col AS VARBINARY) AS bin
     , LTRIM(RTRIM(STR(col, 22, 17))) AS rec
FROM @t
As you can see, the float 1.4666999999999999 has the same binary representation as 1.4667. For your stated needs, I think this query would fit:
SELECT col
     , RIGHT(CONVERT(NVARCHAR(MAX), col), LEN(CONVERT(NVARCHAR(MAX), col)) - CHARINDEX('.', CONVERT(NVARCHAR(MAX), col))) AS prec
FROM @t

Value of real type compares incorrectly

I have field of REAL type in db. I use PostgreSQL. And the query
SELECT * FROM my_table WHERE my_field = 0.15
does not return rows in which the value of my_field is 0.15.
But for instance the query
SELECT * FROM my_table WHERE my_field > 0.15
works properly.
How can I solve this problem and get the rows with my_field = 0.15 ?
To solve your problem use the data type numeric instead, which is not a floating point type, but an arbitrary precision type.
If you enter the numeric literal 0.15 into a numeric (same word, different meaning) column, the exact amount is stored - unlike with a real or float8 column, where the value is coerced to the nearest possible binary approximation. This may or may not be exact, depending on the number and implementation details. The decimal number 0.15 happens to fall between possible binary representations and is stored with a tiny error.
Note that the result of a calculation can be inexact itself, so still be wary of the = operator in such cases.
It also depends how you test. When comparing, Postgres coerces diverging numeric types to a type that can best hold the result.
Consider this demo:
CREATE TABLE t(num_r real, num_n numeric);
INSERT INTO t VALUES (0.15, 0.15);
SELECT num_r, num_n
, num_r = num_n AS test1 --> FALSE
, num_r = num_n::real AS test2 --> TRUE
, num_r - num_n AS result_nonzero --> float8
, num_r - num_n::real AS result_zero --> real
FROM t;
Therefore, if you have entered 0.15 as a numeric literal into your column of data type real, you can find all such rows with:
SELECT * FROM my_table WHERE my_field = real '0.15'
Use numeric columns if you need to store fractional digits exactly.
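If you can change the schema, the fix can be as simple as this (the precision and scale here are placeholders; pick what fits your data):
ALTER TABLE my_table ALTER COLUMN my_field TYPE numeric(10,2);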
Your problem originates from IEEE 754.
0.15 is not 0.15, but 0.15000000596046448 (assuming single precision, which is what real is), as it cannot be exactly represented as a binary floating-point number.
(You can verify this with any IEEE 754 conversion calculator.)
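You can also see it directly in PostgreSQL by widening the literal's type (a minimal sketch):
SELECT 0.15::real::double precision; -- shows the stored single-precision value, about 0.15000000596046448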
Why is this a problem? In this case, most likely because the other side of the comparison uses the exact value 0.15, through an exact representation such as a numeric type.
So there are two ways:
use a format that actually stores the numbers in decimal form, as Erwin suggested (or at least use the same type across the board)
use rounding, as Jack suggested, which has to be applied carefully (note that it relies on a numeric type too, to represent 0.15 exactly)
Recommended reading:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Well, I can't see your data, but I'm guessing that my_field doesn't exactly equal 0.15. Try:
select * from my_table where round(my_field::numeric,2) = 0.15;
Considering both PPTerka's and Jack's answers: approximate numeric data types do not store the exact values specified for many numbers. See Microsoft's description of real values here:
http://technet.microsoft.com/en-us/library/ms187912(v=sql.105).aspx

SQL Server 'like' against a float field produces inconsistent results

I am using LIKE to return matching numeric results against a float field. It seems that once there are more than 4 digits to the left of the decimal, values that match my search item on the right side of the decimal are not returned. Here's an example illustrating the situation:
CREATE TABLE number_like_test (
num [FLOAT] NULL
)
INSERT INTO number_like_test (num) VALUES (1234.56)
INSERT INTO number_like_test (num) VALUES (3457.68)
INSERT INTO number_like_test (num) VALUES (13457.68)
INSERT INTO number_like_test (num) VALUES (1234.76)
INSERT INTO number_like_test (num) VALUES (23456.78)
SELECT num FROM number_like_test
WHERE num LIKE '%68%'
That query does not return the record with the value 13457.68, but it does return the record with the value 3457.68. Also, running the query with 78 instead of 68 does not return the 23456.78 record, but using 76 returns the 1234.76 record.
So to get to the question: why having a larger number causes these results to change? How can I change my query to get the expected results?
The like operator requires a string as a left-hand value. According to the documentation, a conversion from float to varchar can use several styles:
Value         Output
0 (default)   A maximum of 6 digits; scientific notation when appropriate.
1             Always 8 digits; always in scientific notation.
2             Always 16 digits; always in scientific notation.
The default style works fine for the six digits in 3457.68, but not for the seven digits in 13457.68. To get 16 digits instead of 6, you could use convert and specify style 2. But style 2 represents a number like 3.457680000000000e+003, which shifts the digits you are searching for, and you get an unexpected e+003 exponent for free.
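For example (a quick sketch of what style 2 produces):
select convert(varchar(30), cast(13457.68 as float), 2)
-- 1.345768000000000e+004: sixteen digits, but in scientific notation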
The best approach is probably a conversion from float to decimal, which lets you specify the precision and scale. Using precision 20 and scale 10, the float is represented as 3457.6800000000:
where convert(decimal(20,10), num) like '%68%'
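Put together against the sample table from the question:
select num
from number_like_test
where convert(decimal(20,10), num) like '%68%'
-- returns 3457.68 and 13457.68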
When you compare a number with LIKE, it is implicitly converted to a string and then matched.
The problem here is that the float value is not precise, and when it is converted you can get
13457.679999999999999 instead of 13457.68
To avoid this, explicitly format the number; in SQL Server (2012 and later) that can be done with FORMAT:
SELECT num FROM number_like_test
WHERE FORMAT(num, '0.##') LIKE '%68%'
The conversion to string is rounding your values. Both CONVERT and CAST have the same behavior.
SELECT cast(num as nvarchar(50)) as s
FROM number_like_test
Or
SELECT convert(nvarchar(50), num) as s
FROM number_like_test
provide the results:
1234.56
3457.68
13457.7
1234.76
23456.8
You'll have to use the STR function and correct format parameters to try to get your results. For example,
SELECT STR(num, 10, 2) as s
FROM number_like_test
gives:
1234.56
3457.68
13457.68
1234.76
23456.78
Pretty well solved already, but you only need to CAST once, not twice like the other answer suggests, LIKE takes care of the string conversion:
SELECT *
FROM number_like_test
WHERE CAST(num AS DECIMAL(12,6)) LIKE '%68%'
It's probably because a FLOAT data type represents a floating-point number, which is an approximation of the true value and should not be relied on for exact comparisons.
If you need to do a search that includes the float value, you would need to either store it in a decimal data type (which holds the exact number) or convert it to a varchar using something like the STR() function.

How do I count decimal places in SQL?

I have a column X which is full of floats with decimals places ranging from 0 (no decimals) to 6 (maximum). I can count on the fact that there are no floats with greater than 6 decimal places. Given that, how do I make a new column such that it tells me how many digits come after the decimal?
I have seen some threads suggesting that I use CAST to convert the float to a string, then parse the string to count the length of the string that comes after the decimal. Is this the best way to go?
You can use something like this:
declare @v sql_variant
set @v = 0.1242311
select SQL_VARIANT_PROPERTY(@v, 'Scale') as Scale
This will return 7.
I tried to make the above query work with a float column but couldn't get it working as expected. It only works with a sql_variant column as you can see here: http://sqlfiddle.com/#!6/5c62c/2
So, I proceeded to find another way and building upon this answer, I got this:
SELECT value,
LEN(
CAST(
CAST(
REVERSE(
CONVERT(VARCHAR(50), value, 128)
) AS float
) AS bigint
)
) as Decimals
FROM Numbers
Here's a SQL Fiddle to test this out: http://sqlfiddle.com/#!6/23d4f/29
To account for that little quirk, here's a modified version that will handle the case when the float value has no decimal part:
SELECT value,
Decimals = CASE Charindex('.', value)
WHEN 0 THEN 0
ELSE
Len (
Cast(
Cast(
Reverse(CONVERT(VARCHAR(50), value, 128)) AS FLOAT
) AS BIGINT
)
)
END
FROM numbers
Here's the accompanying SQL Fiddle: http://sqlfiddle.com/#!6/10d54/11
This thread is also using CAST, but I found the answer interesting:
http://www.sqlservercentral.com/Forums/Topic314390-8-1.aspx
DECLARE @Places INT
SELECT TOP 1000000 @Places = FLOOR(LOG10(REVERSE(ABS(SomeNumber)+1)))+1
FROM dbo.BigTest
and in ORACLE:
SELECT FLOOR(LOG(10,REVERSE(CAST(ABS(.56544)+1 as varchar(50))))) + 1 from DUAL
A float is just representing a real number. There is no meaning to the number of decimal places of a real number. In particular the real number 3 can have six decimal places, 3.000000, it's just that all the decimal places are zero.
You may have a display conversion which is not showing the rightmost zero values in the decimal.
Note also that the reason there is a maximum of 6 decimal places is that the seventh is imprecise, so the display conversion will not commit to a seventh decimal place value.
Also note that floats are stored in binary, and they actually have binary places to the right of a binary point. The decimal display is an approximation of the binary rational in the float storage which is in turn an approximation of a real number.
So the point is, there really is no sense of how many decimal places a float value has. If you do the conversion to a string (say using the CAST) you could count the decimal places. That really would be the best approach for what you are trying to do.
I answered this before, but I can tell from the comments that it's a little unclear. Over time I found a better way to express this.
Consider pi as
(a) 3.141592653590
This appears to show pi with 11 decimal places, since a trailing zero is easily dropped, but it is actually pi rounded to 12 decimal places. Pi to 16 decimal places is
(b) 3.1415926535897932
A computer or database stores values in binary. For a single-precision float, pi would be stored as
(c) 3.1415927410125732421875
This is actually rounded up to the closest value that a single precision can store, just as we rounded in (a). The next lowest number a single precision can store is
(d) 3.141592502593994140625
So, when you are trying to count the number of decimal places, you are trying to find how many decimal places, after which all remaining decimals would be zero. However, since the number may need to be rounded to store it, it does not represent the correct value.
Numbers also introduce rounding error as mathematical operations are done, including converting from decimal to binary when inputting the number, and converting from binary to decimal when displaying the value.
You cannot reliably find the number of decimal places a number in a database has, because it was approximated in order to fit a limited amount of storage. Both the real value and the exact binary value in the database get rounded when represented in decimal, so there could always be further non-zero digits hidden by the rounding; you don't know where the zeros truly end.
This is a solution for Oracle, but you get the idea. trunc() removes the decimal part in Oracle.
select *
from your_table
where (your_field*1000000 - trunc(your_field*1000000)) <> 0;
The idea of the query: Will there be any decimals left after you multiply by 1 000 000.
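A hedged T-SQL translation of the same idea (ROUND with a non-zero third argument truncates, standing in for Oracle's trunc):
select *
from your_table
where your_field * 1000000 <> round(your_field * 1000000, 0, 1)
-- any remainder means there were digits beyond the sixth decimal place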
Another way I found is
SELECT 1.110000 , LEN(PARSENAME(Cast(1.110000 as float),1)) AS Count_AFTER_DECIMAL
I've noticed that Kshitij Manvelikar's answer has a bug. If there are no decimal places, instead of returning 0, it returns the total number of characters in the number.
So improving upon it:
Case When (SomeNumber = Cast(SomeNumber As Integer)) Then 0 Else LEN(PARSENAME(Cast(SomeNumber as float),1)) End
Here's another Oracle example. As I always warn non-Oracle users before they start screaming at me and downvoting: the SUBSTR and INSTR functions used here have close equivalents in most SQL dialects, and the DUAL table can be replaced with any other table or created. Here's the link to the SQL Server blog where I copied the dual table code from: http://blog.sqlauthority.com/2010/07/20/sql-server-select-from-dual-dual-equivalent/
CREATE TABLE DUAL
(
DUMMY VARCHAR(1)
)
GO
INSERT INTO DUAL (DUMMY)
VALUES ('X')
GO
This query returns the length after the dot (decimal point).
The str can be converted with to_number(str) if required. You can also get the length of the string before the dot: change the code to LENGTH(SUBSTR(str, 1, dot_pos))-1 and remove the +1 in the INSTR part:
SELECT str, LENGTH(SUBSTR(str, dot_pos)) str_length_after_dot FROM
(
SELECT '000.000789' as str
, INSTR('000.000789', '.')+1 dot_pos
FROM dual
)
/
SQL>
STR          STR_LENGTH_AFTER_DOT
-----------  --------------------
000.000789                      6
You already have answers and examples about casting etc...
This question asks about regular SQL, but I needed a solution for SQLite. SQLite has neither a log10 function nor a built-in string-reverse function, so most of the answers here don't work. My solution is similar to Art's answer and, in fact, to what phan describes in the question body. It works by converting the floating-point value (in SQLite, a "REAL" value) to text, and then counting the characters after the decimal point.
For a column named "Column" from a table named "Table", the following query will produce the count of each row's decimal places:
select
length(
substr(
cast(Column as text),
instr(cast(Column as text), '.')+1
)
) as "Column-precision" from "Table";
The code will cast the column as text, then get the index of a period (.) in the text, and fetch the substring from that point on to the end of the text. Then, it calculates the length of the result.
Remember to add LIMIT 100 if you don't want it to run over the entire table!
It's not a perfect solution; for example, it considers "10.0" as having 1 decimal place, even if it's only a 0. However, this is actually what I needed, so it wasn't a concern to me.
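If trailing zeros do matter to you, a hedged refinement is to strip them before counting (rtrim with a character set is built into SQLite):
select
  length(
    rtrim(
      substr(cast(Column as text), instr(cast(Column as text), '.') + 1),
      '0' -- strip trailing zeros, so "10.0" counts as 0 decimal places
    )
  ) as "Column-precision" from "Table";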
Hopefully this is useful to someone :)
This probably doesn't work well for floats, but I used this approach as a quick and dirty way to find the number of significant decimal places in a decimal type in SQL Server. A non-zero last parameter to the ROUND function tells it to truncate rather than round.
CASE
WHEN col = round(col, 1, 1) THEN 1
WHEN col = round(col, 2, 1) THEN 2
WHEN col = round(col, 3, 1) THEN 3
...
ELSE null END