Shouldn't binary_double store a higher value than number in Oracle?

Considering the following test code:
CREATE TABLE binary_test (bin_float BINARY_FLOAT, bin_double BINARY_DOUBLE, NUM NUMBER);
INSERT INTO binary_test VALUES (4356267548.32345E+100, 4356267548.32345E+2+300, 4356267548.32345E+100);
SELECT CASE WHEN bin_double>to_binary_double(num) THEN 'Greater'
WHEN bin_double=to_binary_double(num) THEN 'Equal'
WHEN bin_double<to_binary_double(num) THEN 'Lower'
ELSE 'Unknown' END comparison,
A.*
FROM binary_test A;
I've tried to see which one stores higher values. If I try to use E+300 for the number and binary_float columns, it returns a numeric overflow error, so I thought I could store a greater value with the binary_double.
However, when I tried to check it, it shows a lower value, and the case comparison says it is lower too. Could you please explain this situation?

You are inserting the value 4356267548.32345E+2+300 into the binary double column. That evaluates to 4356267548.32345E+2, which is 435626754832.345, plus 300 - which is 435626755132.345 (or 4.35626755132345E+011, which becomes 4.3562675513234497E+011 when converted to binary double). That is clearly lower than 4356267548.32345E+100 (or 4.35626754832345E+109, which becomes 4.3562675483234496E+109 when converted to binary double).
Not directly relevant, but you should also be aware that you're providing a decimal number literal, which will be implicitly converted to binary double during insert. So you can't use 4356267548.32345E+300, as that is too large for the number data type. If you want to specify a binary double literal then you need to append a d to it, i.e. 4356267548.32345E+300d; but that is still too large.
The highest you can go with that numeric part is 4356267548.32345E+298d, which evaluates to 4.3562675483234498E+307 - just below the data type limit of 1.79769313486231E+308; and note the loss of precision.
db<>fiddle
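To see both effects for yourself, something like the following sketch should work in an Oracle session; it reuses the binary_test table from the question, and the exact digits displayed may vary:
-- The literal as written: E+2 is evaluated first, then + 300, i.e. roughly 4.36E+11
SELECT 4356267548.32345E+2 + 300 AS as_written FROM dual;
-- A genuine BINARY_DOUBLE literal needs the d suffix; E+298 is about as high as this
-- mantissa can go before exceeding the BINARY_DOUBLE limit of ~1.8E+308
INSERT INTO binary_test (bin_double) VALUES (4356267548.32345E+298d);
SELECT bin_double FROM binary_test WHERE bin_double > 1E+300d;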

Related

MariaDB comparison between char and number

I am so confused by the following query
SELECT
'6217001180007179362' = 6217001180007179156 -- 1
, 6217001180007179362 = 6217001180007179156 ; -- 0
How come '6217001180007179362' = 6217001180007179156 is 1?
My MariaDB version is 10.2.11-MariaDB.
When you enter a series of digits in MariaDB/MySQL, it is interpreted as a numeric constant. In your case, this does what you want.
When you enter a series of digits surrounded by single quotes, then it is interpreted as a string.
When you compare two numeric constants -- well, there isn't a problem. The result is what you expect.
However, if one is a string, then the values are implicitly converted. They fall under this condition, as described in the documentation:
In all other cases, the arguments are compared as floating-point (real) numbers.
Your values have more precision than can be represented in a float, so the values look equal to MySQL.
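If the comparison has to be exact, one workaround is to keep floating-point out of the picture, for example by casting the string side to DECIMAL so both sides are compared as exact numbers; a small sketch, assuming MariaDB/MySQL:
SELECT
    CAST('6217001180007179362' AS DECIMAL(25,0)) = 6217001180007179156  -- 0: compared as exact decimals
  , '6217001180007179362' = '6217001180007179156';                      -- 0: compared as strings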

How Can I Get An Exact Character Representation of a Float in SQL Server?

We are doing some validation of data which has been migrated from one SQL Server to another SQL Server. One of the things that we are validating is that some numeric data has been transferred properly. The numeric data is stored as a float datatype in the new system.
We are aware that there are a number of issues with float datatypes, that exact numeric accuracy is not guaranteed, and that one cannot use exact equality comparisons with float data. We don't have control over the database schemas nor data typing and those are separate issues.
What we are trying to do in this specific case is verify that some ratio values were transferred properly. One of the specific data validation rules is that all ratios should be transferred with no more than 4 digits to the right of the decimal point.
So, for example, valid ratios would look like:
.7542
1.5423
Invalid ratios would be:
.12399794301
12.1209377
What we would like to do is count the number of digits to the right of the decimal point and find all cases where the float values have more than four digits to the right of it. We've been using the SUBSTRING, LEN, STR, and a couple of other functions to achieve this, and I am sure it would work if we had numeric fields typed as decimal which we were casting to char.
However, what we have found when attempting to convert a float to a char value is that SQL Server seems to always convert to decimal in between. For example, the field in question shows this value when queried in SQL Server Enterprise Manager:
1.4667
Attempting to convert to a string using the recommended function for SQL Server:
LTRIM(RTRIM(STR(field_name, 22, 17)))
Returns this value:
1.4666999999999999
The value which I would expect if SQL Server were directly converting from float to char (which we could then trim trailing zeroes from):
1.4667000000000000
Is there any way in SQL Server to convert directly from a float to a char without going through what appears to be an intermediate conversion to decimal along the way? We also tried the CAST and CONVERT functions and received similar results to the STR function.
SQL Server Version involved: SQL Server 2012 SP2
Thank you.
Your validation rule seems to be misguided.
An SQL Server FLOAT, or FLOAT(53), is stored internally as a 64-bit floating-point number according to the IEEE 754 standard, with 53 bits of mantissa ("value") plus an exponent. Those 53 binary digits correspond to approximately 15 decimal digits.
Floating-point numbers have limited precision, which does not mean that they are "fuzzy" or inexact in themselves, but that not all numbers can be exactly represented, and instead have to be represented using another number.
For example, there is no exact representation for your 1.4667, and it will instead be stored as a binary floating-point number that (exactly) corresponds to the decimal number 1.466699999999999892708046900224871933460235595703125. Correctly rounded to 16 decimal places, that is 1.4666999999999999, which is precisely what you got.
Since the "exact character representation of the float value that is in SQL Server" is 1.466699999999999892708046900224871933460235595703125, the validation rule of "no more than 4 digits to the right of the decimal point" is clearly flawed, at least if you apply it to the "exact character representation".
What you might be able to do, however, is to round the stored number to fewer decimal places, so that the small error at the end of the decimals is hidden. Converting to a character representation rounded to 15 instead of 16 places (remember those "15 decimal digits" mentioned at the beginning?) will give you 1.466700000000000, and then you can check that all decimals after the first four are zeroes.
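A sketch of that check in T-SQL, assuming a hypothetical table ratios with a float column ratio_value, and ratios with a small integer part as in the examples above:
-- Render the float rounded to 15 decimal places, then flag rows where any
-- digit after the first four decimals is non-zero.
SELECT ratio_value,
       LTRIM(RTRIM(STR(ratio_value, 22, 15))) AS rounded_text
FROM   ratios
WHERE  SUBSTRING(LTRIM(RTRIM(STR(ratio_value, 22, 15))),
                 CHARINDEX('.', LTRIM(RTRIM(STR(ratio_value, 22, 15)))) + 5,
                 22) LIKE '%[1-9]%';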
You can try using cast to varchar.
select case when len(substring(cast(col as varchar(100)),
                               charindex('.', cast(col as varchar(100))) + 1,
                               len(cast(col as varchar(100))))) = 4
            then 'true' else 'false' end
from tablename
where charindex('.', cast(col as varchar(100))) > 0
For this particular number, don't use STR(); use CONVERT or CAST to varchar instead. But, in general, you will always have precision issues when storing in float... it's the nature of the storage of that data type. The best you can do is normalize to a NUMERIC type and compare with threshold ranges (+/- .0001, for example). See the following for a breakdown of how the different conversions work:
declare @float float = 1.4667
select @float,
       convert(numeric(18,4), @float),
       convert(nvarchar(20), @float),
       convert(nvarchar(20), convert(numeric(18,4), @float)),
       str(@float, 22, 17),
       str(convert(numeric(18,4), @float)),
       convert(nvarchar(20), convert(numeric(18,4), @float))
Instead of casting to a VarChar you might try this: cast to a decimal with 4 fractional digits and check if it's the same value as before.
case when field_name <> convert(numeric(38,4), field_name)
then 1
else 0
end
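Wrapped in a query it might look like the following sketch (ratios is a hypothetical table name; the numeric result is implicitly converted back to float for the comparison, so rows that round-trip unchanged have at most four decimals):
SELECT field_name,
       CASE WHEN field_name <> CONVERT(numeric(38,4), field_name)
            THEN 'more than 4 decimals'
            ELSE 'ok'
       END AS validation
FROM   ratios;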
The issue you have here is that float is an approximate numeric data type: REAL (float(24)) holds about seven significant digits and the default FLOAT (float(53)) about fifteen. It approximates the value while using less storage than a decimal/numeric, and that's why you don't use float for values that require exact precision.
Check this example:
DECLARE @t TABLE (
    col FLOAT
);
INSERT INTO @t (col)
VALUES (1.4666999999999999)
     , (1.4667)
     , (1.12399794301)
     , (12.1209377);
SELECT col
     , CONVERT(NVARCHAR(MAX), col)    AS chr
     , CAST(col AS VARBINARY)         AS bin
     , LTRIM(RTRIM(STR(col, 22, 17))) AS rec
FROM @t;
As you can see, the floats 1.4666999999999999 and 1.4667 have the same binary representation. For your stated needs I think this query would fit:
SELECT col
     , RIGHT(CONVERT(NVARCHAR(MAX), col),
             LEN(CONVERT(NVARCHAR(MAX), col)) - CHARINDEX('.', CONVERT(NVARCHAR(MAX), col))) AS prec
FROM @t;

Arithmetic operation with numeric datatype in SQL server yields different results

I get different results when using real and numeric data type.
When I use real as the data type I get finalValue as -139.2466; when I use the numeric data type I get finalValue as -139.246409. Which value is correct?
When I plug these numbers in Excel, it matches to value -139.2466.
For example:
create table #resr ( a1 real, a2 real, a3 real)
insert #resr select 0.471163361822717, 0.0096160000 , 0.001669000000000
select a1*a2*-51.295/a3 finalValue from #resr
create table #resn ( a1 numeric(30,15), a2 numeric(30,15), a3 numeric(30,15))
insert #resn select 0.471163361822717, 0.0096160000 , 0.001669000000000
select a1*a2*-51.295/a3 finalValue from #resn
Floating point data types (of which REAL is a member) store approximate values: a fixed number of significand bits plus an exponent, so the same digits can be scaled up or down, which causes minute differences in how they're interpreted in SQL. This is the reason a single float(10) can hold both the value 1234567890 and .1234567890:
select cast(1234567890 as float(10))
select cast(.1234567890 as float(10))
Exact types (such as Decimal and Numeric) define exactly how many decimal places are allowed, and fill in zeroes out to as many as have been defined.
Floats give you the ability to model a wider range of numbers since you can allow extremely large numbers and extremely small numbers by allowing the decimal point to "float" rather than be a fixed point in memory. They're also fine in most cases as usually the decimal precision you lose isn't a big deal. They also tend to be smaller than precise data types (not always). However, if you know the size of the values you're expecting ahead of time, it's usually best to use a decimal.
Which value is "correct"? In this particular case, neither is exact, but -139.2466 is the closer one: multiplying two numeric(30,15) values would need a result precision of 61, so SQL Server caps the result at precision 38 and truncates the scale of the intermediate results, and that is where the numeric answer loses accuracy. If exactness matters, use decimal/numeric types, but choose a precision and scale that leave room for intermediate results.
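The loss of scale is easy to demonstrate in isolation; a sketch, assuming SQL Server 2008 or later for the inline variable initialisation:
DECLARE @a numeric(30,15) = 0.471163361822717,
        @b numeric(30,15) = 0.009616;
-- The product of two numeric(30,15) values would need precision 61, so SQL Server
-- caps the result at precision 38 and reduces its scale, dropping decimal digits.
SELECT @a * @b                               AS numeric_product,
       CAST(@a AS float) * CAST(@b AS float) AS float_product;  -- keeps ~15 significant digits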

Value of real type incorrectly compares

I have a field of REAL type in my PostgreSQL database, and the query
SELECT * FROM my_table WHERE my_field = 0.15
does not return rows in which the value of my_field is 0.15.
But for instance the query
SELECT * FROM my_table WHERE my_field > 0.15
works properly.
How can I solve this problem and get the rows with my_field = 0.15?
To solve your problem use the data type numeric instead, which is not a floating point type, but an arbitrary precision type.
If you enter the numeric literal 0.15 into a numeric (same word, different meaning) column, the exact amount is stored - unlike with a real or float8 column, where the value is coerced to the nearest possible binary approximation. This may or may not be exact, depending on the number and implementation details. The decimal number 0.15 happens to fall between possible binary representations and is stored with a tiny error.
Note that the result of a calculation can be inexact itself, so still be wary of the = operator in such cases.
It also depends how you test. When comparing, Postgres coerces diverging numeric types to a type that can best hold the result.
Consider this demo:
CREATE TABLE t(num_r real, num_n numeric);
INSERT INTO t VALUES (0.15, 0.15);
SELECT num_r, num_n
, num_r = num_n AS test1 --> FALSE
, num_r = num_n::real AS test2 --> TRUE
, num_r - num_n AS result_nonzero --> float8
, num_r - num_n::real AS result_zero --> real
FROM t;
db<>fiddle here
Old sqlfiddle
Therefore, if you have entered 0.15 as numeric literal into your column of data type real, you can find all such rows with:
SELECT * FROM my_table WHERE my_field = real '0.15'
Use numeric columns if you need to store fractional digits exactly.
Your problem originates from IEEE 754.
0.15 is not 0.15, but 0.15000000596046448 (as a single-precision real), as it cannot be exactly represented as a binary floating point number.
(check this calculator)
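You can see this directly from psql; a small sketch (the exact digits displayed depend on the Postgres version and float output settings):
SELECT 0.15::real          AS shown,     -- displayed as 0.15
       0.15::real::float8  AS expanded,  -- 0.15000000596046448
       0.15::real = 0.15   AS is_equal;  -- false: the literal 0.15 is numeric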
Why is this a problem? In this case, most likely because the other side of the comparison uses the exact value 0.15 - through an exact representation, like a numeric type. (Cleared up on suggestion by Eric)
So there are two ways:
use a format that actually stores the numbers in decimal format - as Erwin suggested
(or at least use the same type across the board)
use rounding as Jack suggested - which has to be used carefully (by the way this uses a numeric type too, to exactly represent 0.15...)
Recommended reading:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
(Sorry for the terse answer...)
Well, I can't see your data, but I'm guessing that my_field doesn't exactly equal 0.15. Try:
select * from my_table where round(my_field::numeric,2) = 0.15;
Considering both PPTerka's and Jack's answers.
Approximate numeric data types do not store the exact values specified for many numbers;
Look here for MS's description of real values.
http://technet.microsoft.com/en-us/library/ms187912(v=sql.105).aspx

Multiplication with NULL and empty column values in SQL

This was my Interview Question
There are two columns called Length and Breadth in an Area table:
Length   Breadth   Length*Breadth
20       NULL      ?
30                 ?
21.2     1         ?
I tried running the same question on MySQL. To insert an empty value I tried the query below; am I missing anything while inserting empty values in MySQL?
insert into test.new_table values (30,);
Answers: with NULL, the result is NULL.
With float and int multiplication, the result is float.
As per your question the expected results would be as below.
SELECT LENGTH,BREADTH,LENGTH*BREADTH AS CALC_AREA FROM AREA;
LENGTH   BREADTH   CALC_AREA
20       NULL      NULL
30       0         0
21.2     1         21.2
For the first record: in SQL Server, any computation with NULL yields NULL.
For the second record: in SQL Server, the product of a non-empty value and an empty value is zero, as the empty value is treated as zero.
For the third record: in SQL Server, a computation between two non-empty values yields a non-empty value.
Check SQL Fiddle for reference - http://sqlfiddle.com/#!3/f250a/1
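As for the MySQL part of the question: a numeric column has no 'empty' value, so the blank cell has to be stored as NULL (or as an explicit number). A sketch with a hypothetical area table:
CREATE TABLE area (Length DECIMAL(10,2), Breadth DECIMAL(10,2));
INSERT INTO area VALUES (20, NULL), (30, NULL), (21.2, 1);
SELECT Length, Breadth, Length * Breadth AS calc_area FROM area;
-- rows 1 and 2: calc_area is NULL; row 3: 21.2 * 1 = 21.2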
That blank Breadth (second row) cannot happen unless Breadth is VARCHAR. Assuming that, the answers will be (each case can be reproduced with the sketch after this list):
NULL (since NULL times anything is NULL)
Throws error (since an empty string is not a number. In Sql Server, the error is "Error converting data type varchar to numeric.")
21.20 (since in Sql Server, for example, conversion to a numeric type is automatic, so SELECT 21.2 * '1' returns 21.20).
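Those three cases can be reproduced directly; a quick sketch, assuming SQL Server:
SELECT 20 * CAST(NULL AS varchar(10));   -- NULL: anything times NULL is NULL
SELECT 21.2 * '1';                       -- 21.20: the string is implicitly converted to numeric
SELECT 30.0 * '';                        -- fails: Error converting data type varchar to numeric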
Assuming that Length and Breadth are numerical types of some kind, the second record does not contain possible values; Breadth must be either 0 or NULL.
In any event, any mathematical operation in SQL involving a NULL value will return NULL, indicating that the expression cannot be evaluated. The answers are NULL, impossible, and 21.2.
The product of any value and NULL is NULL. This is called "NULL propagation" if you want to Google it. To score points in an interview, you might want to mention that NULL isn't a value; it's a special marker.
The fact that the column Breadth has one entry "NULL" and one entry that's blank (on the second row) is misleading. A numeric column that doesn't have a value in a particular row means that row is NULL. So the second column should also show "NULL".
The answer to the third row, 21.2 * 1, depends on the data type of the column "Length*Breadth". If it's a data type like float, double, or numeric(16,2), the answer is 21.2. If it's an integer column (integer, long, etc.), the answer is 21.
A more snarky answer might be "There's no answer. The string "Length*Breadth" isn't a legal SQL column name."
In standard SQL they would all generate errors because you are combining values (or nulls) of different types:
CAST ( 20 AS FLOAT ) * CAST ( NULL AS INTEGER ) -- mismatched types error
CAST ( '' AS INTEGER ) -- type conversion error
CAST ( AS INTEGER ) -- type conversion error
CAST ( 21.2 AS FLOAT ) * CAST ( 2 AS INTEGER ) -- mismatched types error
On the other hand, most SQL products would implicitly cast values when combining values (or nulls) of different types according to type precedence, e.g. combining a float value with an integer value would in effect cast the integer to float and result in a float. At the product level, the most interesting question is what happens when you combine a null of type integer with a value (or even a null) of type float...
...but, frankly, not terribly interesting. In an interview you are presented with a framework (in the form of questions asked of you) on which to present your knowledge, skills and experience. The 'answer' here is to discuss nulls (e.g. point out that nulls are tricky to define and behave in unintuitive ways, which leads to frequent bugs and a desire to avoid nulls entirely, etc) and whether implicit casting is a good thing.