Appropriate values for -Infinity & Infinity in Postgres - sql

In one of our cases we have to store values for +infinity and -infinity in a Postgres DB.
What would be an appropriate value to use for each?
If there is no single right choice, then please suggest the best-suited one.
Thank you

You can actually use +infinity and -infinity for the FLOAT4 and FLOAT8 (i.e. real and double precision) data types, and for timestamps.
regress=> SELECT FLOAT8 '+infinity', FLOAT8 '-infinity';
  float8  |  float8
----------+-----------
 Infinity | -Infinity
(1 row)
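The same literals work for timestamps, for example:
regress=> SELECT TIMESTAMP 'infinity', TIMESTAMP '-infinity';
 timestamp | timestamp
-----------+-----------
 infinity  | -infinity
(1 row)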
For other types, either use a separate column, use the minimum/maximum values for the type, or (where logically appropriate) use null.
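For instance, a sketch of the min/max sentinel approach on a plain integer column (the table and column names here are made up; 2147483647 and -2147483648 are the bounds of the 4-byte integer type):
CREATE TABLE scores (score integer);
-- the type's maximum stands in for +infinity, the minimum for -infinity
INSERT INTO scores VALUES (2147483647);   -- "+infinity"
INSERT INTO scores VALUES (-2147483648);  -- "-infinity"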

You can insert the string 'infinity' or '-infinity' into numeric, float, real, and double precision columns (numeric accepts infinity since PostgreSQL 14). However, this raises an out-of-range error for integer/bigint/smallint.
'+infinity', 'infinity' and 'Infinity' are equivalent
'-infinity' and '-Infinity' are equivalent
Some examples that work ✅:
INSERT INTO my_table
(real_column)
VALUES ('-Infinity'::float);
or even
INSERT INTO my_table
(numeric_column)
VALUES ('infinity'::numeric);
or
INSERT INTO my_table
(float_column)
VALUES ('+infinity');
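You can check the spelling equivalence directly:
SELECT 'infinity'::float8 = 'Infinity'::float8 AS pos_equal,   -- true
       '-infinity'::float8 = '-Infinity'::float8 AS neg_equal; -- true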

Add another column named "infinite": 1 means +infinity, -1 means -infinity.
When you are checking numbers, check that column first.
This will save you a lot of time spent thinking about magic numbers and other tricks.
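A minimal sketch of that layout (all names are hypothetical):
CREATE TABLE measurements (
    val      numeric,            -- NULL when the row is infinite
    infinite smallint DEFAULT 0  -- 0 = finite, 1 = +infinity, -1 = -infinity
);
-- check the flag before trusting the number
SELECT CASE infinite
           WHEN 1  THEN 'infinity'
           WHEN -1 THEN '-infinity'
           ELSE val::text
       END AS display_value
FROM measurements;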

Related

Shouldn't binary_double store a higher value than number in Oracle?

Considering the following test code :
CREATE TABLE binary_test (bin_float BINARY_FLOAT, bin_double BINARY_DOUBLE, NUM NUMBER);
INSERT INTO binary_test VALUES (4356267548.32345E+100, 4356267548.32345E+2+300, 4356267548.32345E+100);
SELECT CASE WHEN bin_double>to_binary_double(num) THEN 'Greater'
WHEN bin_double=to_binary_double(num) THEN 'Equal'
WHEN bin_double<to_binary_double(num) THEN 'Lower'
ELSE 'Unknown' END comparison,
A.*
FROM binary_test A;
I've tried to see which one stores higher values. If I try to use E+300 for the number and binary_float columns, it returns a numeric overflow error. So I thought I could store a greater value with the binary_double.
However, when I checked it, it shows a lower value, and the case comparison says it is lower too. Could you please explain this situation?
You are inserting the value 4356267548.32345E+2+300 into the binary double column. That evaluates to 4356267548.32345E+2, which is 435626754832.345, plus 300 - which is 435626755132.345 (or 4.35626755132345E+011, which becomes 4.3562675513234497E+011 when converted to binary double). That is clearly lower than 4356267548.32345E+100 (or 4.35626754832345E+109, which becomes 4.3562675483234496E+109 when converted to binary double).
Not directly relevant, but you should also be aware that you're providing a decimal number literal, which will be implicitly converted to binary double during insert. So you can't use 4356267548.32345E+300, as that is too large for the number data type. If you want to specify a binary double literal then you need to append a d to it, i.e. 4356267548.32345E+300d; but that is still too large.
The highest you can go with that numeric part is 4356267548.32345E+298d, which evaluates to 4.3562675483234498E+307 - just below the data type limit of 1.79769313486231E+308; and note the loss of precision.
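For illustration, an insert that stays inside each type's range could look like this (a sketch; BINARY_FLOAT tops out near 3.4E+38, BINARY_DOUBLE near 1.8E+308, NUMBER near 1E+126):
INSERT INTO binary_test VALUES (
  4356267548.32345E+28f,   -- 'f' marks a BINARY_FLOAT literal; ~4.4E+37 fits
  4356267548.32345E+298d,  -- 'd' marks a BINARY_DOUBLE literal; ~4.4E+307 fits
  4356267548.32345E+100    -- plain literal is NUMBER; ~4.4E+109 fits
);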
db<>fiddle

SQL: Casting a Float field to NVARCHAR

I have a table with one float field:
CREATE TABLE IMPORTES (IMPORTE FLOAT)
Then I run these instructions:
INSERT INTO IMPORTES (IMPORTE) VALUES (15226.25)
INSERT INTO IMPORTES (IMPORTE) VALUES (9999.25)
INSERT INTO IMPORTES (IMPORTE) VALUES (5226.25)
When I execute SELECT CAST(IMPORTE AS NVARCHAR(40)), the biggest value gets rounded, so that 15226.25 becomes 15226.3. Why is that? How can I cast and still get the original value?
According to the documentation for CAST and CONVERT, for float and real expressions the style argument to CONVERT defaults to 0, which returns a maximum of 6 digits. Apparently CAST is actually calling CONVERT with this default value "under the hood", so to speak. The only way you are going to get consistent results is to change the data type of IMPORTE to, for instance, numeric(18,2), if you know you will always have only 2 decimal places.
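For example, a cast that goes through decimal first keeps the stored digits (a sketch, assuming two decimal places always suffice):
SELECT CAST(CAST(IMPORTE AS decimal(18,2)) AS NVARCHAR(40)) AS importe_text
FROM IMPORTES;
-- returns 15226.25 rather than the rounded 15226.3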

Value of real type incorrectly compares

I have a field of REAL type in my db. I use PostgreSQL. And the query
SELECT * FROM my_table WHERE my_field = 0.15
does not return rows in which the value of my_field is 0.15.
But for instance the query
SELECT * FROM my_table WHERE my_field > 0.15
works properly.
How can I solve this problem and get the rows with my_field = 0.15 ?
To solve your problem use the data type numeric instead, which is not a floating point type, but an arbitrary precision type.
If you enter the numeric literal 0.15 into a numeric (same word, different meaning) column, the exact amount is stored - unlike with a real or float8 column, where the value is coerced to the nearest possible binary approximation. This may or may not be exact, depending on the number and implementation details. The decimal number 0.15 happens to fall between possible binary representations and is stored with a tiny error.
Note that the result of a calculation can be inexact itself, so still be wary of the = operator in such cases.
It also depends how you test. When comparing, Postgres coerces diverging numeric types to a type that can best hold the result.
Consider this demo:
CREATE TABLE t(num_r real, num_n numeric);
INSERT INTO t VALUES (0.15, 0.15);
SELECT num_r, num_n
, num_r = num_n AS test1 --> FALSE
, num_r = num_n::real AS test2 --> TRUE
, num_r - num_n AS result_nonzero --> float8
, num_r - num_n::real AS result_zero --> real
FROM t;
db<>fiddle here
Old sqlfiddle
Therefore, if you have entered 0.15 as numeric literal into your column of data type real, you can find all such rows with:
SELECT * FROM my_table WHERE my_field = real '0.15'
Use numeric columns if you need to store fractional digits exactly.
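If you can change the schema, the conversion is one statement (a sketch; pick a precision and scale that suit your data, and note that the existing float values are rounded as they are converted):
ALTER TABLE my_table ALTER COLUMN my_field TYPE numeric(10,2);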
Your problem originates from IEEE 754.
0.15 is not 0.15, but 0.15000000596046448 (stored as a single-precision real), as it cannot be exactly represented as a binary floating point number.
(you can verify this with an online IEEE 754 converter)
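You can also surface the stored approximation from within Postgres by widening the real value to float8:
SELECT 0.15::real::float8;  -- 0.15000000596046448 (exact output may vary by version)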
Why is this a problem? In this case, most likely because the other side of the comparison uses the exact value 0.15 - through an exact representation, like a numeric type. (Cleared up on suggestion by Eric)
So there are two ways:
use a format that actually stores the numbers in decimal format - as Erwin suggested
(or at least use the same type across the board)
use rounding as Jack suggested - which has to be used carefully (by the way this uses a numeric type too, to exactly represent 0.15...)
Recommended reading:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
(Sorry for the terse answer...)
Well, I can't see your data, but I'm guessing that my_field doesn't exactly equal 0.15. Try:
select * from my_table where round(my_field::numeric,2) = 0.15;
Considering both PPTerka's and Jack's answers:
Approximate numeric data types do not store the exact values specified for many numbers.
Look here for MS's description of real values.
http://technet.microsoft.com/en-us/library/ms187912(v=sql.105).aspx

SQL server 'like' against a float field produces inconsistent results

I am using LIKE to return matching numeric results against a float field. It seems that once there are more than 4 digits to the left of the decimal, values that match my search item on the right side of the decimal are not returned. Here's an example illustrating the situation:
CREATE TABLE number_like_test (
    num [FLOAT] NULL
)
INSERT INTO number_like_test (num) VALUES (1234.56)
INSERT INTO number_like_test (num) VALUES (3457.68)
INSERT INTO number_like_test (num) VALUES (13457.68)
INSERT INTO number_like_test (num) VALUES (1234.76)
INSERT INTO number_like_test (num) VALUES (23456.78)
SELECT num FROM number_like_test
WHERE num LIKE '%68%'
That query does not return the record with the value of 13457.68, but it does return the record with the value of 3457.68. Also, running the query with 78 instead of 68 does not return the 23456.78 record, but using 76 returns the 1234.76 record.
So to get to the question: why having a larger number causes these results to change? How can I change my query to get the expected results?
The like operator requires a string as a left-hand value. According to the documentation, a conversion from float to varchar can use several styles:
Value        Output
0 (default)  A maximum of 6 digits. Use in scientific notation, when appropriate.
1            Always 8 digits. Always use in scientific notation.
2            Always 16 digits. Always use in scientific notation.
The default style works fine for the six digits in 3457.68, but not for the seven digits in 13457.68. To use 16 digits instead of 6, you could use convert and specify style 2. But style 2 represents a number like 3.457680000000000e+003: the digits no longer sit where you expect around the decimal point, and you get an unexpected e+003 exponent for free.
The best approach is probably a conversion from float to decimal. That conversion allows you to specify the precision and scale. Using precision 20 and scale 10, the float is represented as 3457.6800000000:
SELECT num FROM number_like_test
WHERE convert(decimal(20,10), num) LIKE '%68%'
When you compare a number with LIKE, it is implicitly converted to a string and then matched.
The problem here is that a float is not precise, so after conversion you can get
13457.679999999999999 instead of 13457.68
To avoid this, explicitly format the number; in SQL Server (2012 and later) this can be done with FORMAT:
SELECT num FROM number_like_test
WHERE FORMAT(num, '0.##') LIKE '%68%'
The conversion to string is rounding your values. Both CONVERT and CAST have the same behavior.
SELECT cast(num as nvarchar(50)) as s
FROM number_like_test
Or
SELECT convert(nvarchar(50), num) as s
FROM number_like_test
provide the results:
1234.56
3457.68
13457.7
1234.76
23456.8
You'll have to use the STR function and correct format parameters to try to get your results. For example,
SELECT STR(num, 10, 2) as s
FROM number_like_test
gives:
1234.56
3457.68
13457.68
1234.76
23456.78
Pretty well solved already, but you only need to CAST once, not twice as the other answer suggests; LIKE takes care of the string conversion:
SELECT *
FROM number_like_test
WHERE CAST(num AS DECIMAL(12,6)) LIKE '%68%'
And here's a SQL Fiddle showing the rounding behavior.
It's probably because a FLOAT data type represents a floating point number which is an approximation of the number and should not be relied on for exact comparisons.
If you need to do a search that includes the float value you would need to either store it in a decimal data type (which will hold the exact number) or convert it to a varchar using something like the STR() function

Multiplication with NULL and empty column values in SQL

This was my interview question:
There are two columns called Length and Breadth in the Area table:

Length  Breadth  Length*Breadth
20      NULL     ?
30               ?
21.2    1        ?
I tried reproducing the same question in MySQL. To insert an empty value I tried the query below. Am I missing anything when inserting empty values in MySQL?
insert into test.new_table values (30,);
Answers: with NULL, the result is NULL.
With float and int multiplication, the result is a float.
As per your question the expected results would be as below.
SELECT LENGTH,BREADTH,LENGTH*BREADTH AS CALC_AREA FROM AREA;
LENGTH  BREADTH  CALC_AREA
20      NULL     NULL
30      0        0
21.2    1        21.2
For the first record: in SQL SERVER, if you do a computation with NULL, the answer is NULL.
For the second record: in SQL SERVER, the product of a non-empty value and an empty value is zero, as the empty value is treated as zero.
For the third record: in SQL SERVER, a computation between two non-empty values yields a non-empty value.
Check SQL Fiddle for reference - http://sqlfiddle.com/#!3/f250a/1
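A quick way to see both behaviours side by side (a sketch for SQL Server):
SELECT 20 * NULL            AS with_null,  -- NULL
       30 * CAST('' AS int) AS with_empty; -- 0, because '' converts to 0 for integer types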
That blank Breadth (second row) cannot happen unless Breadth is VARCHAR. Assuming that, the answers will be:
NULL (since NULL times anything is NULL)
An error (since an empty string is not a number; in Sql Server, the error is "Error converting data type varchar to numeric.")
21.20 (since in Sql Server, for example, conversion to a numeric type is automatic, so SELECT 21.2 * '1' returns 21.20).
Assuming that Length and Breadth are numeric types of some kind, the second record does not contain possible values: Breadth must be either 0 or NULL.
In any event, any mathematical operation in SQL involving a NULL value will return NULL, indicating that the expression cannot be evaluated. The answers are NULL, impossible, and 21.2.
The product of any value and NULL is NULL. This is called "NULL propagation" if you want to Google it. To score points in an interview, you might want to mention that NULL isn't a value; it's a special marker.
The fact that the column Breadth has one entry "NULL" and one entry that's blank (on the second row) is misleading. A numeric column that doesn't have a value in a particular row means the value in that row is NULL. So the second row should also show "NULL".
The answer to the third row, 21.2 * 1, depends on the data type of the column "Length*Breadth". If it's a data type like float, double, or numeric(16,2), the answer is 21.2. If it's an integer column (integer, long, etc.), the answer is 21.
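For instance (a sketch in SQL Server syntax):
SELECT CAST(21.2 * 1 AS float) AS as_float,  -- 21.2
       CAST(21.2 * 1 AS int)   AS as_int;    -- 21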
A more snarky answer might be "There's no answer. The string "Length*Breadth" isn't a legal SQL column name."
In standard SQL they would all generate errors, because you are combining values (or nulls) of different types:
CAST ( 20 AS FLOAT ) * CAST ( NULL AS INTEGER ) -- mismatched types error
CAST ( '' AS INTEGER ) -- type conversion error
CAST ( AS INTEGER ) -- syntax error: nothing to cast
CAST ( 21.2 AS FLOAT ) * CAST ( 1 AS INTEGER ) -- mismatched types error
On the other hand, most SQL products would implicitly cast values when combining values (or nulls) of different types according to type precedence, e.g. multiplying a float value by an integer value would in effect cast the integer to float and result in a float. At the product level, the most interesting question is what happens when you combine a null of type integer with a value (or even a null) of type float...
...but, frankly, not terribly interesting. In an interview you are presented with a framework (in the form of questions asked of you) on which to present your knowledge, skills and experience. The 'answer' here is to discuss nulls (e.g. point out that nulls are tricky to define and behave in unintuitive ways, which leads to frequent bugs and a desire to avoid nulls entirely, etc) and whether implicit casting is a good thing.