Postgres timestamp difference seems to miss 1 hour? - sql

I'm trying to calculate the difference between two dates in Postgres and found that in several cases my tests fail. While debugging I noticed something interesting: when I subtract one date from another, the result seems to be short by one hour. Here's the script (the table has only one timestamp field):
select now(), d1, now() - d1, extract(day from date_trunc('day', now() - d1))
from test;
And here's the result: the interval comes out with 22 hours where I expected 23.
This seemed strange, so I decided to check it with another service and got the result I expected (23 hrs instead of 22):
(see https://www.timeanddate.com/date/durationresult.html?d1=2&m1=3&y1=2019&d2=1&m2=4&y2=2019&h1=23&i1=55&s1=00&h2=23&i2=48&s2=30).
Can somebody explain these results? Am I doing something wrong or missing something obvious? I'm using Postgres 9.6 on macOS.

Many countries switch to daylight saving time between March 2nd and April 1st. Because the clocks move ahead, there is one less hour between March 2, 2019 and April 1, 2019.
Beware that Postgres has its own time zone setting, which may not match the user's time zone, especially for a web application. To deal with this, set the application to the user's time zone and the database to UTC, and translate all dates to UTC before passing them to the database.
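A minimal sketch to reproduce the effect with the question's two timestamps, assuming a session time zone such as 'Europe/Berlin' (any zone with a DST transition between March 2 and April 1, 2019 will do):
set time zone 'Europe/Berlin'; -- assumption: a zone that springs forward on 2019-03-31
select timestamptz '2019-04-01 23:48:30' - timestamptz '2019-03-02 23:55:00';
-- 29 days 22:53:30  (one hour "lost" to the clock change)
set time zone 'UTC';
select timestamptz '2019-04-01 23:48:30' - timestamptz '2019-03-02 23:55:00';
-- 29 days 23:53:30  (no transition in UTC, so the expected 23 hours appear)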

Related

Date vs Timestamp vs Interval (Second to Day, etc.) on the basis of performance, efficiency and superiority in Oracle

DATE and TIMESTAMP both carry a time component, and INTERVAL is used when manipulating dates via addition, year-wise, day-wise, etc.
I'm still unsure about the exact difference between these types, especially when it comes to dates in Oracle.
Is there any major difference in efficiency, or some other difference, in the usage of DATE, TIMESTAMP and INTERVAL?
Your question is not clear but this information may help you.
TIMESTAMP supports fractional seconds, unlike DATE, which supports only whole seconds.
TIMESTAMP exists in three flavors:
TIMESTAMP does not contain any time zone information.
TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE contain time zone information.
Regarding calculation and manipulation there is actually no difference between TIMESTAMP and DATE; only a very few functions support just one of the two types.
DATE is an old data type; TIMESTAMP was introduced later ("later" meaning in 9i, i.e. about 20 years ago).
INTERVAL YEAR TO MONTH and INTERVAL DAY TO SECOND are interval data types, they do not contain any absolute date information.
Hope this gave some hints. Otherwise please elaborate your question.
DATE does not store fractional seconds, so comparisons that depend on sub-second precision won't work with DATE.
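A quick way to see the fractional-second difference, as a sketch in Oracle SQL (the literal value is arbitrary):
select timestamp '2019-04-01 23:48:30.123456' as ts,
       cast(timestamp '2019-04-01 23:48:30.123456' as date) as dt
from dual;
-- ts keeps the .123456 fraction; dt loses it, so round-tripping through DATE
-- cannot distinguish two times less than one second apart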

SQL query date according to time zone

We are using a Vertica database with table columns of type timestamptz; all data is inserted according to the UTC time zone.
We are using spring-jdbc's NamedParameterJdbcTemplate.
All queries are based on full calendar days, e.g. start date 2013/08/01 and end date 2013/08/31, which brings back everything between '2013/08/01 00:00:00.0000' and '2013/08/31 23:59:59.9999'.
We are trying to modify our queries to take time zones into account, i.e. for my local time zone I can ask for '2013/08/01 00:00:00.0000 Asia/Jerusalem' till '2013/08/31 23:59:59.9999 Asia/Jerusalem', which is obviously different from '2013/08/01 00:00:00.0000 UTC' till '2013/08/31 23:59:59.9999 UTC'.
So far, I cannot find a way to do so, I tried setting the timezone in the session:
set timezone to 'Asia/Jerusalem';
This doesn't even work in my database client.
Calculating the difference in our Java code will not work for us as we also have queries returning date groupings (this will get completely messed up).
Any ideas or recommendations?
I am not familiar with Vertica, but here is some general advice:
It is usually best to use half-open intervals for date range queries. The start date should be inclusive, while the end date should be exclusive. In other words:
start <= date < end
or
start <= date AND end > date
Your end date wouldn't be '2013/08/31 23:59:59.9999'; it would instead be the start of the next day, '2013/09/01 00:00:00.0000'. This avoids problems relating to fractional-second precision.
That example is for a single input. Since you are querying a range of dates, you have two inputs. So it would be:
yourFieldInDatabase >= yourStartParameter
AND
yourFieldInDatabase < yourEndParameter
Again, you would first increment the end parameter value to the start of the next day.
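As a concrete sketch of the half-open pattern (the table events and column ts are made-up names for illustration):
SELECT *
FROM events
WHERE ts >= '2013-08-01 00:00:00'
  AND ts <  '2013-09-01 00:00:00'; -- exclusive end: the start of the next day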
It sounds like perhaps Vertica is TZ aware, given that you talked about the timestamptz type in your question. Assuming it is similar to Oracle's TIMESTAMPTZ type, it sounds like your solution will work just fine.
But usually, if you are storing times in UTC in your database, you would simply convert the query input time(s) in advance. So rather than querying between '2013/08/01 00:00:00.0000' and '2013/09/01 00:00:00.0000', you would convert ahead of time and query between '2013/07/31 21:00:00.0000' and '2013/08/31 21:00:00.0000'. There are numerous posts already on how to do that conversion in Java, either natively or with Joda-Time, so I won't repeat that here.
As a side note, you should make sure that whatever TZDB implementation you are using (Vertica's, Java's, or JodaTime's) has the latest 2013d update, since that includes the change for Israel's daylight saving time rule that goes into effect this year.
Okay, so apparently:
set time zone to 'Asia/Jerusalem';
worked and I just didn't realize it, but for the sake of helping others I'm going to add something else that works:
select field at time zone 'Asia/Jerusalem' from my_table;
will work for timestamptz fields.
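Combining that with the half-open range above, a sketch of a day-grouped query (my_table and ts are placeholder names; the AT TIME ZONE semantics here follow Postgres, which Vertica's implementation resembles, so verify the literal syntax against your Vertica version):
SELECT date_trunc('day', ts AT TIME ZONE 'Asia/Jerusalem') AS local_day,
       count(*)
FROM my_table
WHERE ts >= TIMESTAMPTZ '2013-08-01 00:00:00 Asia/Jerusalem'
  AND ts <  TIMESTAMPTZ '2013-09-01 00:00:00 Asia/Jerusalem'
GROUP BY 1
ORDER BY 1;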

How to convert a unix timestamp (INT) to monetdb timestamp ('YYYY-MM-DD HH:MM:SS') local time format

Q1: I want to convert a unix timestamp (INT) to the MonetDB timestamp ('YYYY-MM-DD HH:MM:SS') format,
but it is giving me GMT time, not my actual local time.
When I do
select (epoch(cast(current_timestamp as timestamp))-epoch(timestamp '2013-04-25 11:49:00'))
where 2013-04-25 11:49:00 is my system's current time, it gives the same difference
I tried using
set time zone interval '05:30' HOUR TO MINUTE;
but it did not change the result
How can I solve this problem?
Example Problem:
I wanted to convert the unix timestamp 1366869289, which should be around "2013-04-25 11:25:00", but MonetDB gives "2013-04-25 05:55:00".
Knowing nothing about MonetDB, but a lot about timezones, I decided to look in their documentation to see what kind of datatypes are supported and how conversions are handled.
I found this page on Temporal data types. Based on that, I can conclude that a timestamp in MonetDB is always intended to reference UTC/GMT time - which is consistent with other systems.
In order to get a value that is for a particular time zone, they offer the following example:
SET TIME ZONE INTERVAL '1' HOUR TO MINUTE
I assume this means to set the database to offset all times by 1 hour, effectively placing the values in UTC+01:00, such as the offset for British Summer Time.
The page also goes on to point out the problems that can arise from using just an offset to adjust time values (see "Time Zone != Offset" in the timezone tag wiki). It also offers a list of various named time zones, but it does not show how to set a time zone to one of the named values. Also, their list appears to be proprietary and incomplete. While at first glance the entries appear to have similarities to the IANA/Olson time zone database, the identifiers they specify are not valid TZDB names.
There are some other functions listed on this page, without much explanation. One that looks promising for your needs is LOCALTIMESTAMP. Perhaps this will take the local time zone into account, which appears to be what you were looking for.
I could not find any additional details specific to MonetDB date/time/timezone handling. The documentation appears to be fairly incomplete. You might want to reach out to their mailing list.
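In the meantime, a heavily hedged sketch of what the conversion might look like; this assumes (unverified against MonetDB's documentation) that epoch() also works in the int-to-timestamp direction, and it falls back on the offset-based session zone from the question since named zones don't appear to be settable:
set time zone interval '+05:30' HOUR TO MINUTE; -- assumption: offset-only zones are accepted
select epoch(1366869289);
-- hoped-for result: 2013-04-25 11:24:49, i.e. the UTC instant shifted to UTC+05:30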

Is there a simple and clean way to convert `DATE` to the `TIMESTAMP AT TIME ZONE` of that date at midnight UTC?

This should be really simple, but I'm having an embarrassing amount of trouble with it.
In PostgreSQL 9.1, I need to interpret a field stored as a DATE in the DB as if it were a TIMESTAMPTZ representing midnight UTC on that date. I'd like to do it in a clean, readable way that someone can come along, look at, and understand.
The only ways I've found to do it so far are both very ugly. One converts it to a TIMESTAMP WITHOUT TIME ZONE, then creates a TIMESTAMP WITH TIME ZONE from it by interpreting it as if it were UTC:
SELECT CAST(DATE '2012-01-01' AS TIMESTAMP WITHOUT TIME ZONE) AT TIME ZONE 'utc'
The other way is worse:
('2012-01-01'::date)::timestamptz - (current_timestamp AT TIME ZONE 'UTC' - current_timestamp)
in that it converts the date to a timestamp for midnight local time, then subtracts the time zone offset. I couldn't find any way to get that offset as an interval natively (which seems crazy), so I ended up getting it by comparing current_timestamp in local time with current_timestamp in UTC.
The only other way I could work out used extract to get the date parts and assembled a new timestamptz from them. I won't even show that one, it's too ugly.
Both approaches feel all kinds of weird and wrong. Is there any sane way - standard or no - to convert in a readable and easily understood way from a DATE to a timestamptz of midnight UTC on that date?
I'm looking for something like (imaginary, won't work)
'2012-01-01'::date AS TIMESTAMPTZ IN TIME ZONE '00:00';
or
to_timestamp('2012-01-01'::date, '00:00'::time, 'UTC');
Please point out the stupidly obvious thing I'm missing.
Note that I'm testing to make sure the date is truly right internally, not just at display, with extract(epoch from $1) where $1 is the converted date.
Your first approach is correct. And it's not that ugly, is it? In simplified Postgres syntax:
SELECT '2012-1-1'::date::timestamp AT TIME ZONE 'UTC';
Applied to a variable or column it looks even more elegant:
SELECT mydate::timestamp AT TIME ZONE 'UTC';
If you are going to enter the date manually, you can shortcut to:
SELECT '2012-1-1 0:0'::timestamp AT TIME ZONE 'UTC'
The result will always be displayed according to the local timezone of the client (i.e. with the according offset), but that has no influence on the value.
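To verify the value internally rather than at display time, as the question suggests, you can compare the epoch of the converted value with the known Unix time for that UTC midnight:
SELECT extract(epoch from ('2012-01-01'::date::timestamp AT TIME ZONE 'UTC'));
-- 1325376000, which is exactly 2012-01-01 00:00:00 UTC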
Note: I don't know much about PSQL, but I've some experience with date/time matters.
Your first way feels right to me. You're effectively going from "local date" to "local date/time" to "date/time in a particular time zone". Those are all reasonable steps, and ones I'd expect to see in normal date/time APIs.
This approach never introduces the system default time zone as far as I can tell, which is a thoroughly good thing. It performs one logical step at a time, assuming that the cast does the sensible thing.
You don't need to worry about the local date/time being either ambiguous or missing in the target time zone, as UTC doesn't have any DST transitions.
Basically, it looks fine. If it works and performs as well as you need it to, I'd stick with it.

Determine Time Zone Offset in T-SQL

My database application is going to be deployed at multiple sites in different time zones.
I need a T-SQL function that will determine the UTC timestamp of midnight on January 1 of the current year for YTD calculations. All of the data is stored in UTC timestamps.
For example, Chicago is UTC-6 with Daylight Saving Time (DST); the function needs to return '2008-01-01 06:00:00' if run at any time in 2008 in Chicago. If run in New York (UTC-5, also with DST) next year, it needs to return '2009-01-01 05:00:00'.
I can get the current year from YEAR(GETDATE()). I thought I could use a DATEDIFF between GETDATE() and GETUTCDATE() to determine the offset, but the result depends on whether the query is run during DST or not. I do not know of any built-in T-SQL functions for determining the offset, or whether the current time is in DST.
Does anyone have a solution to this problem in T-SQL? I could hard-code it or store it in a table, but I would prefer not to. I suppose this is a perfect situation for using CLR integration in SQL Server 2005; I am just wondering if there is a T-SQL solution I am unaware of.
Check out this previous question and answer for related information:
Effectively Converting dates between UTC and Local (ie. PST) time in SQL 2005
(To summarize: you do need to build time zone and DST tables in SQL Server 2005. In the next version of SQL Server we get some help with time zones.)
Unless I'm mistaken, the GETUTCDATE() function uses the time zone defined on the server - it has no information regarding the client's time zone (or any time zone). I don't think that information is stored anywhere in SQL Server 2005, which makes it impossible for it to calculate this information.
Maybe you could 'borrow' the data from Oracle's time zone file and build your own SQL Server function?
Off topic (could be useful to someone else) but if you were using Oracle, you could use the FROM_TZ function and 'AT TIME ZONE':
FROM_TZ(YOUR_TIMESTAMP, 'UTC') AT TIME ZONE 'America/Dawson_Creek'
Hmm, I guess I'm not understanding the problem. If the database app is already storing UTC timestamps for all of its transactions, and you want to sum up some values since the first of the year "local time", your condition would have to be something like:
(timestamp + (getutcdate() - getdate())) > cast('01/01/2008' as datetime)
DST can be on or off depending on when in the year the query is run, but getdate() takes it into account, so you have to dynamically calculate the offset every time.
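For readers on SQL Server 2016 or later: the built-in AT TIME ZONE applies the DST rules for you, so no offset arithmetic is needed. A sketch, assuming the Windows zone name 'Central Standard Time' (which covers Chicago, DST included):
-- midnight on January 1 of the current year, expressed in UTC
DECLARE @jan1 datetime2 = DATEFROMPARTS(YEAR(GETDATE()), 1, 1);
SELECT @jan1 AT TIME ZONE 'Central Standard Time' AT TIME ZONE 'UTC';
-- e.g. 2008-01-01 06:00:00.0000000 +00:00 for a Chicago installation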