Ingres multiplication gives wrong result - sql

I have an Ingres table with following columns
from_date ingresdate
to_date ingresdate
model_amt money
The dates can reflect a period of any number of days, and model_amt is always a weekly figure. I need to work out the total model_amt for the period.
To do this I need to know how many days are covered by the period, then divide model_amt by 7 and multiply it by the number of days.
However, I am getting incorrect results using the code below:
select model_amt,
       date_part('day', b.to_date - b.from_date),
       model_amt / 7 * int4(date_part('day', b.to_date - b.from_date))
from table b
For example, where model_amt = 88.82 and the period is for 2 weeks, I get the following output
+--------------------+-------------+--------------------+
|model_amt           |col2         |col3                |
+--------------------+-------------+--------------------+
|             £88.82 |           14|             £177.66|
+--------------------+-------------+--------------------+
But 88.82 / 7 * 14 = 177.64, not 177.66?
Any ideas what is going on? The same issue happens regardless of whether I include the int4 function around the date_part.
* Update 15:28 *
The solution was to add a float8 function around the model_amt
float8(model_amt)/ 7 * interval('days', to_date - from_date)
Thanks for the responses.
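For what it's worth, one plausible way to arrive at 177.66 (a Python sketch, not confirmed against the Ingres money implementation): if the money type rounds the intermediate quotient to two decimal places before the multiplication, the numbers match exactly:

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical reconstruction: a money type that keeps two decimal places
# would round model_amt / 7 to cents *before* multiplying by the day count.
weekly = Decimal("88.82")
daily = (weekly / 7).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(daily)        # 12.69
print(daily * 14)   # 177.66 -- the value the query returned
print(float(weekly) / 7 * 14)   # ~177.64 -- the value expected
```

Casting to float8 first, as in the accepted workaround, keeps the full-precision quotient all the way through.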

In computers, floating point numbers are notoriously inexact. You can do all kinds of basic mathematical calculations on floating point numbers and the results will be off by a few decimal places.
Some background can be found here (and it's very googleable): http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm
Generally to avoid inaccuracies, you need to use a language specific feature (e.g. BigDecimal in Java) to "perfectly" store the decimals. Alternatively, you can represent decimals as separate integers (e.g. main number is one integer and the decimal is another integer) and combine them later.
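A minimal Python illustration of both points (the decimal module playing the role of BigDecimal here):

```python
from decimal import Decimal

# Classic binary floating point surprise:
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# A decimal type stores the values exactly, so the comparison holds:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
```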
So, I suspect this is just Ingres showing the normal floating point inaccuracies, and that there are known workarounds for it in that database.
Update
Here's a support article from Actian specifically about ingres floating point issues which seems useful: https://communities.actian.com/s/article/Floating-Point-Numbers-Causes-of-Imprecision.

Related

Oracle SQL: Is There a Maximum Date Difference Oracle Can Interpret

I'm working on SQL that looks for rows in a table where the row's 'last_run' date plus 'frequency' (in minutes) is greater than the current date/time. I've noticed that there appears to be an upper bound for date comparisons Oracle can make sense of.
For example this query;
with tests as
(
select
'TEST 1' as code,
99999999 as frequency,
sysdate as last_run
from dual
union
select
'TEST 2' as code,
99999999999 as frequency,
sysdate as last_run
from dual
)
select
p.*,
(p.last_run + p.frequency / 24 / 60 ) as next_run
from tests p
where (p.last_run + p.frequency / 24 / 60 < sysdate or p.last_run is null)
I would expect this query to return no rows, but instead it returns:
CODE   | FREQUENCY   | LAST_RUN                | NEXT_RUN
-------+-------------+-------------------------+------------------------
TEST 2 | 99999999999 | 05-OCT-2021 10:15:46 AM | 15-APR-4455 08:54:46 PM
I can solve the problem by setting frequency = null and my other code will recognize that the row need not be considered, but it seems strange to me that Oracle can't recognize that the year 4455 > 2021.
Is there some maximum conceivable date in Oracle that I'm unaware of?
I'm running this in Oracle SQL Developer Version 18.2.0.183 and Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production.
it seems strange to me that Oracle can't recognize that the year 4455 > 2021
It can. The problem is that your year isn't 4455; it's -4455. See this db<>fiddle, showing the result (in a different timezone) with default DD-MON-RR format, your output format, and ISO format with the year sign included (S format element).
CODE   | FREQUENCY   | LAST_RUN            | NEXT_RUN
-------+-------------+---------------------+---------------------
TEST 2 | 99999999999 | 2021-10-05 17:16:21 | -4454-03-12 03:55:21
With your frequency of 99999999999 the value you are adding to the current date is 69444444 days, which is (very roughly) 190128 years - clearly that's going to put you well past the maximum date of 9999-12-31; and indeed with a different value like 9999999999 (one less digit), which is 6944444 days or roughly 19012 years, you get an error - also shown in that db<>fiddle.
The issue seems to be how Oracle manipulates its internal representation when it does the calculation; in adding that large value it appears that the year - which is stored in two bytes - is overflowing and wrapping.
Using the type-13 version, 190128+2021 = 192149, which is (256 * 750) + 149. 750 doesn't fit in one byte, so you get the modulus, which is 238. That would make the first two bytes of the calculated date come out as 149,238. That actually corresponds to year -4459:
select dump(date '-4459-01-01') from dual;
Typ=13 Len=8: 149,238,1,1,0,0,0,0
which is close enough to demonstrate that's what's happening - given that the calculation is outside the expected range and it's probably trying to do invalid leap day calculations in there somewhere. The point, though, is that the generated, wrapped, value represents a valid year in that internal notation.
With the lower value, 19012+2021 = 21033, which is (256 * 82) + 41. Now there is no wrapping, so the first two bytes of the calculated date come out as 41,82. That is not a valid year, so Oracle knows to throw the ORA-01841 exception.
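The wrap can be sketched in a few lines of Python (assuming, per the DUMP output above, a low-byte-first two-byte year):

```python
# Sketch of the two-byte year wrap described above. The byte layout is
# assumed from the DUMP output: low byte first, then the high byte mod 256.
def year_bytes(year):
    return year % 256, (year // 256) % 256

print(year_bytes(190128 + 2021))  # (149, 238) -> wraps to a "valid" negative year
print(year_bytes(19012 + 2021))   # (41, 82)   -> not a valid year, so ORA-01841
```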
So, you need to limit the frequency value to a number that won't ever go past 9999-12-31, or test it at run time against 9999-12-31 minus the current date - and if it's too big, ignore it. That's if you want what appears to be a magic number at all.
There is a maximum date in Oracle: it is 9999-12-31 23:59:59 in YYYY-MM-DD HH24:MI:SS format.
The Oracle documentation describes the valid date values (LINK).
The problem is that you are adding roughly 190,128 years with your second query, likely overflowing the internal year representation many times over. It just so happened that you ended up back at a representable value.

Calculate conversion rate in SQL

The code below calculates the conversion rate of a dataset. The code relevant to this question is in line 13. When I calculate the conversion rate, I divide the total number of purchases made on the website by the total number of users (people who browse) on the website. The output I get is 0.495, but I don't understand why I need the '1.0 *' at the start of line 13 for this to work. I don't know the purpose of this part of the code, but without it the code doesn't work.
Your code is not MySQL code. MySQL does not support square brackets around identifiers. One database that does is SQL Server. And it does integer division, so in SQL Server, 1/2 is 0 rather than 0.5.
The * 1.0 simply converts the integer to something with a decimal point.
Assuming userid is not NULL, this is more easily expressed as:
avg( is_purchase * 1.0 )
forpas and Gordon Linoff are correct that SQL Server performs integer division by default. In future, and for more complex calculations, you can use CTEs or subqueries employing CAST to represent values as floating point values prior to division. E.g., at lines 6-7 you would have:
CAST(CASE WHEN p.userid IS NOT NULL THEN 1 ELSE 0 END AS float) AS is_purchase
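A quick Python sketch of the underlying behavior (the counts are hypothetical, chosen to reproduce the 0.495 from the question):

```python
# Hypothetical counts giving the 0.495 conversion rate from the question:
purchases, users = 99, 200

print(purchases // users)        # integer division, like SQL Server's 99/200 -> 0
print(1.0 * purchases / users)   # promote to a float first -> 0.495
```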

SQL Calculation to 2 decimal places

I've got a simple calculation (910 / 28 = 32.5) and I'm trying to perform this in a SQL query:
SELECT CONVERT(DECIMAL(5,2), (910 / 28),2) AS average
But the answer is coming out at 32.00. I'm obviously missing something simple; could someone spare a moment to point out my error please?
Thanks,
C
Use this:
SELECT CONVERT(DECIMAL(5,2), (910.0 / 28)) AS average
By taking the quotient as 910.0 / 28 SQL Server will retain decimal precision. Then, make your cast to a decimal with two places. By the way, as far as I know CONVERT typically takes just two parameters when converting a number to decimal.
You can use this query for a dynamic value from a table:
SELECT CONVERT(DECIMAL(5,2), CAST(910 AS decimal) / 28) AS average
It will give the desired output.
Unsure if this applies to your database, but in Trino SQL (a sort of database middleware layer), I find adding a decimal point followed by two zeros to any of two operands in this query (e.g., select 910.00/23 AS average or select 910/23.00 AS average) returns a non-integer value (39.57, instead of 39).
Adding 3 zeros after the decimal (select 910.000/23 AS average) returns a 3-decimal place result (39.565), and so on.
Try this query...
SELECT CAST(910 AS decimal(5,2)) / CAST(28 AS decimal(5,2)) as average
Try using this:
select cast(round(910/28.0,2) as numeric(36,2)) AS average
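All of these answers boil down to the same point; a quick Python sketch of integer versus decimal division for this calculation:

```python
from decimal import Decimal

print(910 // 28)                 # 32 -- integer division truncates, as in the question
print(Decimal("910.0") / 28)     # 32.5 -- one decimal operand preserves precision
print((Decimal(910) / 28).quantize(Decimal("0.01")))   # 32.50 -- fixed to two places
```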

NZSQL/Code - How to set NZSQL to NOT round to the nearest whole number

Hello, everyone.
I am using some functions in NZSQL and am finding that NZSQL rounds to the nearest whole number; I am not sure whether this is by design, or whether the rounding can be disabled.
One of the functions that I am using is
TO_NUMBER(AGE(column_a),'000')
and it rounds to the nearest whole number, but I would like it to leave a decimal number, something like 12.42. Is this possible? Should I be using a different function? I have tried using '00.000' but it still rounds...
Thanks in advance!
The AGE function returns an interval which may not behave as you'd expect/hope when paired up with TO_NUMBER's format templates. The shape of the templates has a particular meaning that is different than what you might intuit.
For example, here I have a format template that corresponds to NUMERIC(20,6)
SYSTEM.ADMIN(ADMIN)=> select age('01/01/1960'::date) , to_number(age('01/01/1960'::date),'99999999999999999999.999999');
AGE | TO_NUMBER
-----------------------------------+---------------------
54 years 11 mons 15 days 23:17:21 | 541115231721.000000
(1 row)
Here you can see the interval expressed as digits in the result of the TO_NUMBER. The first two digits represent the 54 years, the next two represent the 11 months, and the last two before the decimal point represent the 21 seconds. Note that there is no value past the decimal point, and this is expected (well, by the design if not by us).
If we take one 9 off the template to the right or the left of the decimal point we get a malformed response. Notice that the 48 seconds is truncated to just a 4.
SYSTEM.ADMIN(ADMIN)=> select age('01/01/1960'::date) , to_number(age('01/01/1960'::date),'9999999999999999999.999999'), to_number(age('01/01/1960'::date),'99999999999999999999.99999');
AGE | TO_NUMBER | TO_NUMBER
-----------------------------------+--------------------+--------------------
54 years 11 mons 15 days 23:27:48 | 54111523274.000000 | 54111523274.000000
(1 row)
The point there was just to highlight that the format of the TO_NUMBER template does something other than what you likely expect/want.
What you probably want instead (if I get the right gist from your comment) is something like this, which uses DATE_PART as a loose substitute for DATEDIFF:
SYSTEM.ADMIN(ADMIN)=> select date_part('day',now() - '01/01/1960'::date) / 365.242;
?COLUMN?
------------
54.9580826
(1 row)
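The same day-count approach can be sketched in Python (the dates here are hypothetical, and 365.242 approximates the mean year length):

```python
from datetime import date

# Rough fractional age, mirroring the DATE_PART / 365.242 approach above:
days = (date(2014, 12, 17) - date(1960, 1, 1)).days
print(round(days / 365.242, 4))   # roughly 54.96
```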

SQL Server CEILING of 100 = 101? [duplicate]

This question already has answers here:
SQL Server Strange Ceiling() behavior
(2 answers)
Closed 8 years ago.
I've found a really weird behavior in SQL Server 2012, the CEILING of 100 gives me 101 and sometimes 100.
I need to get the ceiling of a number considering 2 decimals, that means convert a 0.254 to 0.26
So I tried to run
SELECT CEILING(field * 100.0) / 100.0
FROM Table
This should work, and it does, at least for most of the data.
Any idea on how to solve it?
What you are seeing here is floating point errors. When you store a number in a floating point column, it isn't exact, so the number 1 may actually be 1.0000000000000000000000001. So multiplying it by 100 gives you a number a tiny bit greater than 100, hence CEILING rounds it up to 101.
The solution is to ROUND the number first which will remove the floating point errors. Note I have used 5 as the number of decimal places, you will need to decide on your own value of precision.
SELECT CEILING(ROUND(field,5)*100.0)/100.0 FROM Table
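A small Python sketch of the effect and the fix (the tiny floating point error is injected by hand for illustration):

```python
import math

field = 1.0000000000000002   # a stored "1" carrying a tiny floating point error

# Naive ceiling at two decimals: the error pushes 100 up to 101, giving 1.01
print(math.ceil(field * 100.0) / 100.0)   # 1.01

# Rounding away the noise first gives the intended 1.0
print(math.ceil(round(field, 5) * 100.0) / 100.0)   # 1.0
```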