Hello PostgreSQL experts.
I'm trying to understand why these two Boolean expressions return different results.
The first returns TRUE, whereas the second returns FALSE.
SELECT CAST('2019-01-01T12:00:00' AS TIMESTAMP) - CAST('2018-01-01T13:00:00' AS TIMESTAMP) <= INTERVAL '365 DAYS',
CAST('2019-01-01T12:00:00' AS TIMESTAMP) - CAST('2018-01-01T13:00:00' AS TIMESTAMP) <= INTERVAL '1 YEAR';
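For reference, the raw difference being compared works out to just under 365 days, which you can check directly:

```sql
SELECT CAST('2019-01-01T12:00:00' AS TIMESTAMP)
     - CAST('2018-01-01T13:00:00' AS TIMESTAMP);
-- 364 days 23:00:00
```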
Neither 2019 nor 2018 was a leap year.
I expected that for non-leap years a one-year interval would be equivalent to a 365-day interval, but I'm obviously wrong.
Tested with PostgreSQL 15.
Your help will be highly appreciated!
Edit:
So it looks like this is more of a bug than a feature.
"ISO/IEC 9075-2:2016 SQL Foundation" defines two types of intervals: one is called year-month interval and the other day-time interval. Each type is comparable only with itself. Therefore, the second predicate should have raised an error for incompatible types, which would have saved a lot of headaches for everyone who uses intervals. If any PostgreSQL contributors are reading this, I think this should be considered for implementation in a future release.
Postgres appears to define an interval of 1 year as 365.25 days.
That would be because the interval type does not include a rooted start and end time. It's a size without position.
If you like, you can compare 1 metre and 1 metre from my chair.
So interval doesn't know which year you're talking about.
It averages it to 365.25 days, which is about what you'd get if you averaged most four-year periods.
You can check this with:
select extract(epoch from interval '1 year')
This gives 31557600 and you can do the maths from there.
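That figure is exactly 365.25 days expressed in seconds, which you can verify with plain arithmetic:

```sql
SELECT 365.25 * 24 * 60 * 60;
-- 31557600
```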
Edit: I got curious after some comments and discovered:
a) it's more complicated than I could possibly have imagined
b) it's a bit of a moving target.
This commit from April 2022 claims to undo a regression which (afaict) made it 365 days instead of 365.25 due to rounding. It links to the commit which (apparently) introduced that, from April 2021. The fix specifically mentions that it relies on DAYS_PER_YEAR being a multiple of 0.25.
https://github.com/postgres/postgres/commit/f2a2bf66c87e14f07aefe23cbbe2f2d9edcd9734
The version of Postgres where I got 365.25 is 14.6.
SELECT VERSION() gives
PostgreSQL 14.6 (Debian 14.6-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
So I'd guess my install predates the point where that rounding issue was introduced, and @Atmo has one from after it was introduced but before the fix was released.
(All conclusions made here were from reading comments, not code).
Dates and times. Hard.
Related
presto> select date '0001-01-01';
Expected result: '0001-01-01'
Actual result: '0001-01-02'
The day is increasing by one and I'm not sure why.
Can anyone help me resolve this?
If I run the above query in presto-cli, the result is '0001-01-01', which is what I expect, but when I run the same query via SQL Workbench the result is '0001-01-02'.
This is because Presto internally uses Joda Time and Java Time for its server APIs, while JDBC uses (per the JDBC standard) java.sql.Date and related classes.
Both Joda Time and Java Time use the proleptic Gregorian calendar. This means they count days as if the Gregorian calendar had existed forever.
java.sql.Date (and java.util.Calendar, for that matter) uses the Julian-Gregorian calendar, which may seem more accurate: it accounts for the fact that the Julian calendar was used until a certain date, after which the Gregorian calendar was, and still is, used. (Of course, it is not and cannot be truly accurate, because the cut-over date varies from country to country and spans several centuries.)
My recommendation is: use varchar for representing historical dates and datetimes.
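As a minimal sketch of that approach in Presto, cast the date to varchar on the server so the JDBC driver never has to build a java.sql.Date at all (the column alias is illustrative):

```sql
SELECT CAST(DATE '0001-01-01' AS VARCHAR) AS d;
-- '0001-01-01' as plain text, unaffected by the driver's calendar
```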
As of Sunday we've been running into a problem with our query:
SELECT COUNT(*) / 5 AS count
FROM [dbo].[LogisticalOrderLines_shadow]
WHERE SentToBI >= DATEADD(MINUTE, 55, GETDATE())
This should return the average number of messages sent per minute (based on the last 5 minutes). However, since Monday this query has not been returning the expected results.
We store times in our database in our local time zone (which was UTC+1 and is now UTC+2). To avoid having to change this every half year, I would like to turn this into a query that always works no matter which time zone the server is in, but that seems to be quite an issue since we don't store UTC times...
How would I go about this, if it is even possible?
PS: a very strange thing happened: for just a few hours this morning I had changed the query to use -5 instead of +55, and it actually worked; that was the only way to get the correct amounts. Now, however, I had to change it again to 115 (which is actually what I would expect, it being UTC+2 and such). This was quite strange and I have no explanation for it.
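One possible direction (a sketch, not a drop-in fix) is to compute "five minutes ago" from the server's UTC-aware clock and convert it to the zone the data is stored in. This assumes SQL Server 2016+ for AT TIME ZONE, and 'W. Europe Standard Time' is only a guess at the stored zone; substitute your own:

```sql
SELECT COUNT(*) / 5 AS [count]
FROM [dbo].[LogisticalOrderLines_shadow]
WHERE SentToBI >= CONVERT(datetime2,
        DATEADD(MINUTE, -5, SYSDATETIMEOFFSET())
            AT TIME ZONE 'W. Europe Standard Time');
```

Because a named time zone knows its own DST rules, the query no longer needs editing twice a year.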
DATE and TIMESTAMP both include a time component, and INTERVAL is used when manipulating dates via addition, year-wise, day-wise, etc.
I'm still unsure about the actual difference, though, especially when it comes to dates in Oracle.
Is there any major difference in terms of efficiency, or some other difference, in the usage of DATE, TIMESTAMP and INTERVAL?
Your question is not clear, but this information may help you.
TIMESTAMP supports fractional seconds, unlike DATE, which supports only whole seconds.
TIMESTAMP exists in three flavors:
- TIMESTAMP does not contain any time zone information.
- TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE contain time zone information.
Regarding calculation and manipulation, there is actually no difference between TIMESTAMP and DATE. Only a very few functions support just one of these two types.
DATE is an old data type; TIMESTAMP was introduced later (well, "later" means in 9i, i.e. about 20 years ago).
INTERVAL YEAR TO MONTH and INTERVAL DAY TO SECOND are interval data types, they do not contain any absolute date information.
Hope this gave some hints. Otherwise please elaborate your question.
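A quick illustration of the fractional-seconds point and of interval arithmetic (a sketch against DUAL; the date literal is arbitrary):

```sql
-- DATE keeps whole seconds only; TIMESTAMP keeps fractional seconds
SELECT CAST(SYSTIMESTAMP AS DATE) AS d,
       SYSTIMESTAMP               AS ts
FROM dual;

-- intervals add the same way to both types
SELECT DATE '2020-06-15' + INTERVAL '1' MONTH AS next_month
FROM dual;
-- 2020-07-15
```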
DATE does not store fractional seconds, so comparing times that differ by less than one second won't work!
I'm trying to calculate the difference between two dates in Postgres and found that several of my tests fail. While debugging I noticed an interesting thing: when I subtract one date from another, the result seems to be missing one hour. Here's the script (the table has only one timestamp field):
select now(), d1, now() - d1, extract(day from date_trunc('day', now() - d1))
from test;
And here's the result:
This seemed strange, so I decided to check it with some other service and got the result I expected (23 hrs instead of 22):
(see https://www.timeanddate.com/date/durationresult.html?d1=2&m1=3&y1=2019&d2=1&m2=4&y2=2019&h1=23&i1=55&s1=00&h2=23&i2=48&s2=30).
Can somebody explain these results? Am I doing something wrong or missing something obvious? I'm using Postgres 9.6 on macOS.
Many countries switched to daylight saving time between March 2nd and April 1st. Because the clocks move ahead, there is one less hour between 2 March 2019 and 1 April 2019.
Beware that Postgres has its own time zone, which may not match the user's time zone, especially for a web application. To deal with this, set the application to the user's time zone and the database to UTC, and translate all dates to UTC before passing them to the database.
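The DST effect is easy to reproduce with literals; this sketch assumes a session time zone that sprang forward on 31 March 2019 (Europe/Berlin is one such zone):

```sql
SET TIME ZONE 'Europe/Berlin';

-- timestamptz subtraction measures real elapsed time,
-- so the spring-forward hour is missing:
SELECT timestamptz '2019-04-01 23:48:30'
     - timestamptz '2019-03-02 23:55:00';
-- 29 days 22:53:30

-- plain timestamps ignore time zones and give the
-- wall-clock answer instead:
SELECT timestamp '2019-04-01 23:48:30'
     - timestamp '2019-03-02 23:55:00';
-- 29 days 23:53:30
```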
I need to add decorators that will represent the range from 6 days ago until now.
How should I do it?
Let's say the date is relative, 604800000 millis from now, and its absolute value is 1427061600000:
#-604800000
#1427061600000
#now in millis - 1427061600000
#1427061600000 - now in millis
Is there a difference between using relative and absolute times?
Thanks
#-518400000--1
This will give you data for the last 6 days (or the last 144 hours).
I think all you need is to read this.
Basically, you have the choice of #time, which is time since the Epoch (your #1427061600000). You can also express it as a negative number, which the system will interpret as NOW - time (your #-604800000). Both of these work, but they don't give the result you want: instead of returning everything that was added in that time range, they return a snapshot of your table from 6 days ago...
Although you COULD use that snapshot, eliminate all duplicates between it and your current table, and then take THOSE results as what was added during your 6 days, you're better off with:
Using time ranges directly, which you cover with your 3rd and 4th lines. I don't know if the order makes a difference, but I've always used #time1-time2 with time1 < time2 (in your case, #1427061600000 - now in millis).