Daylight saving time and UTC time - SQL

We use a program that saves its timestamps in UTC. We are a company local to Utah, so we are affected by daylight saving time.
For example, if we receive a call right now, at 12:52:00 MST, it would be saved in the database as 19:52:00.
My first concern is next year, when DST starts again on March 13th, 2016, and I run this at the exact same time: will the timestamp in UTC then be 18:52:00, or would it stay at 19:52:00?
My second concern is converting the date in the database back to my local time: do I have to first check whether it is DST, and if it is subtract 6 hours, and if not subtract 7?
So using the above example:
IsDST = 13:52:00 (-6)
IsNotDST = 12:52:00 (-7)
I assume this is something I need to worry about when converting to/from UTC?
My main question, aside from the two concerns above: is there anything built into SQL Server/T-SQL that handles this conversion for me, or do I need to write everything myself?
I have started on it already, but now I need to work in the DST handling where necessary:
DECLARE @declared_start_datetime DATETIME,
        @declared_end_datetime DATETIME,
        @converted_start_datetime DATETIME,
        @converted_end_datetime DATETIME

SET @declared_start_datetime = '11/04/2015 07:00:00' -- Hour we open phones
SET @declared_end_datetime = '11/04/2015 18:00:00' -- Hour we close phones

-- Shift the declared local times to UTC by the server's *current* offset from UTC
SET @converted_start_datetime = DATEADD(second, DATEDIFF(second, GETDATE(), GETUTCDATE()), @declared_start_datetime)
SET @converted_end_datetime = DATEADD(second, DATEDIFF(second, GETDATE(), GETUTCDATE()), @declared_end_datetime)

SELECT @declared_start_datetime AS 'Declared Start',
       @declared_end_datetime AS 'Declared End'

SELECT @converted_start_datetime AS 'Converted Start',
       @converted_end_datetime AS 'Converted End'

For example, if we receive a call right now, at 12:52:00 MST, it would be saved in the database as 19:52:00.
My first concern is next year, when DST starts again on March 13th, 2016, and I run this at the exact same time: will the timestamp in UTC then be 18:52:00, or would it stay at 19:52:00?
Mountain Standard Time (MST) is UTC-7, and Mountain Daylight Time (MDT) is UTC-6. It's a lot easier to reason about if you write out the full date, time, and offset(s) involved in the conversion. Here it is in standard ISO 8601 extended format:
2015-11-06T12:52:00-07:00 = 2015-11-06T19:52:00Z
2016-03-13T12:52:00-06:00 = 2016-03-13T18:52:00Z
Each local time on the left side of the equation is marked with the correct local time and local offset for that time. Then to get to UTC (identified by Z), you simply subtract the offset from the local time. Or, think of it as inverting the sign and adding, if that's easier to rationalize.
So yes, it would store it at 18:52:00 UTC when you are in daylight time. This is the correct behavior.
My second concern is converting the date in the database back to my local time: do I have to first check whether it is DST, and if it is subtract 6 hours, and if not subtract 7?
Yes, but keep in mind that it's the date and time reflected by the timestamp you're converting. It makes no difference whether you are currently in DST or not.
However, keep in mind that time zone conversion should usually be avoided in the database layer, if you can at all help it. In the vast majority of use cases, it's an application-layer concern.
For example, if you're writing to SQL Server from an application built in .NET, then you could use the TimeZoneInfo class with the "Mountain Standard Time" ID (which is for both MST and MDT). Or, you could use the Noda Time library with the TZDB identifier of "America/Denver".
By using a library, you don't have to concern yourself with all of the various details of when DST starts and stops, nor how it has changed throughout history in different parts of the world.
In the rarer case where you actually need time zone conversion done at the database level, you can certainly write a stored procedure or UDF of your own (such as some of the question comments linked to), but depending on your needs they may not be sufficient. Typically they tend to encode just one set of fixed rules for time zone conversions, so they won't take other time zones or historical changes into account.
There are a few generic time zone solutions for SQL Server, but unlike other database platforms, there's nothing built in. I'll recommend my SQL Server Time Zone Support OSS project, and there are others if you search. But really, you should hopefully not need this, and should do the conversion in the application layer whenever possible.
Update: With SQL Server 2016 CTP 3.1, there is now built-in support for time zones via the AT TIME ZONE statement. See the CTP announcement for examples.
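For example, a minimal sketch against a 2016 CTP 3.1+ instance, reusing the example values from above (the "Mountain Standard Time" ID covers both MST and MDT, as noted earlier):
DECLARE @utc DATETIME = '2016-03-13 18:52:00';
-- Attach the UTC offset, then convert; the DST rules are applied for the timestamp's own date
SELECT @utc AT TIME ZONE 'UTC' AT TIME ZONE 'Mountain Standard Time' AS MountainLocal;
-- Returns 2016-03-13 12:52:00 -06:00 (MDT); a January value would come back with -07:00 (MST)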

Related

SQL Query to Update Existing DateTime field and values in DB to PST from GMT

I am looking for the best way to write a query that takes an existing DateTime field/value in a database (there are over 50K rows) and updates the DateTime value to a different time zone. What I want to do is change the existing date/time values from GMT time (the current server time) to PST (which would be minus 7 hours, I think). I want to change the values in the database, then I will change the server's time zone to PST so that all new records show the PST time zone.
Any help on this would be appreciated, thanks in advance for your time.
You probably shouldn't do this at all. Keep your data in GMT (really, UTC) in the database, and convert it to Pacific time in your application logic as needed. For example, if you connect to your SQL Server from a .NET application, query the UTC time from the database and convert it using TimeZoneInfo.ConvertTimeFromUtc.
The biggest reason for this is that Pacific time uses daylight saving time. So while part of the year it follows PST (which is UTC-8), during the summer it follows PDT (which is UTC-7). That means the amount of time to adjust by is variable, and depends on the date itself.
Not only that, but a value that occurs during the fall-back transition is ambiguous in local time. For example, 2015-11-01 at 1:30 AM occurs twice in Pacific time. If you convert all your data to Pacific time, you'll potentially lose some information.
If you really need to convert between time zones directly in SQL Server, consider using my SQL Server Time Zone Support project. For example:
SELECT Tzdb.UtcToLocal('2015-07-01 00:00:00', 'America/Los_Angeles')
Unfortunately, SQL Server does not have any built-in time zone conversion functionality. However, this can be accomplished via SQLCLR. You can create a scalar User-Defined Function using the DateTime.ToLocalTime() method.
Yes, technically you can use the TimeZoneInfo class which has methods such as ConvertTime(DateTime, TimeZoneInfo, TimeZoneInfo) and ConvertTimeFromUtc(DateTime, TimeZoneInfo), but the TimeZoneInfo class requires that the assembly have a PERMISSION_SET of UNSAFE whereas DateTime can run as SAFE. The only drawback to sticking with the DateTime methods is that they can only convert between UTC and the server's local time, as opposed to TimeZoneInfo which can convert between any two given time zones.
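For reference, the dbo.ConvertTimeToLocalTime function used in the batch below would be the T-SQL binding of such a SQLCLR method; a hypothetical sketch (the assembly and class names are illustrative, not from the question):
CREATE FUNCTION dbo.ConvertTimeToLocalTime (@utcDateTime DATETIME)
RETURNS DATETIME
AS EXTERNAL NAME [TimeZoneLib].[UserDefinedFunctions].[ConvertTimeToLocalTime];
-- the .NET method behind it would simply call DateTime.ToLocalTime(), which runs as SAFE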
Since your data is already in GMT/UTC, it would be fairly easy to convert to Pacific* time using DateTime.ToLocalTime, but only if you change your server's time zone before updating the data, and not after (as is mentioned in the question). At that point you could do something like the following:
ALTER TABLE [SchemaName].[TableName]
    ADD [IsConverted] BIT NULL; -- make sure we don't convert rows multiple times

WHILE (1 = 1)
BEGIN
    -- Convert in batches of 2000 rows to keep each transaction small
    UPDATE TOP (2000) tab
    SET    tab.DateField = dbo.ConvertTimeToLocalTime(tab.DateField),
           tab.IsConverted = 1
    FROM   [SchemaName].[TableName] tab
    WHERE  tab.IsConverted IS NULL;

    IF (@@ROWCOUNT = 0)
    BEGIN
        BREAK;
    END;
END;

-- Remove the temporary tracking field, but not until all rows are converted
IF (NOT EXISTS(
        SELECT *
        FROM   [SchemaName].[TableName] tab
        WHERE  tab.IsConverted IS NULL
   ))
BEGIN
    ALTER TABLE [SchemaName].[TableName]
        DROP COLUMN [IsConverted];
END;
* Please note that I used "Pacific" time instead of PST, because Pacific time can be either PST or PDT depending on daylight saving time. Hence, trying to convert by simply doing DATEADD(HOUR, -7, [DateField]) would be correct for some rows, but not all rows.
Also, for those interested in this particular conversion function, it is available (though not for free) in the Full version of the SQL# library (which I am the author of). There are some blog posts around that show how to do it, though I have not seen one that follows best practices such that it will perform optimally. Of course, if this is a one-time conversion, then optimal performance for a price is admittedly less important than less-efficient-but-free :-).
You can change the database time zone (in Oracle) by using the SET TIME_ZONE clause of the ALTER DATABASE statement. For example:
ALTER DATABASE SET TIME_ZONE = 'Europe/London';
ALTER DATABASE SET TIME_ZONE = '-05:00';
You can refer to the following link for further details:
http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch4datetime.htm#NLSPG263

Storing DateTime (UTC) vs. storing DateTimeOffset

I usually have an "interceptor" that, right before reading/writing from/to the database, does DateTime conversion (from UTC to local time, and from local time to UTC), so I can use DateTime.Now (with derivations and comparisons) throughout the system without worrying about time zones.
Regarding serialization and moving data between computers, there is no need to bother, as the datetime is always UTC.
Should I continue storing my dates (SQL 2008 - datetime) in UTC format or should I instead store it using DateTimeOffset (SQL 2008 - datetimeoffset)?
UTC dates in the database (datetime type) have worked and been well understood for so long; why change it? What are the advantages?
I have already looked into articles like this one, but I'm not 100% convinced though. Any thoughts?
There is one huge difference: there are cases where you cannot use UTC alone.
If you have a scenario like this:
- one server and several clients (all in geographically different time zones)
- clients create some data with datetime information
- clients store it all on the central server
Then:
- datetimeoffset stores the local time of the client and ALSO the offset to UTC
- all clients know the UTC time of all data, and also the local time in the place where the information originated
But:
- UTC datetime stores just the UTC datetime, so you have no information about the local time in the client location where the data originated
- other clients do not know the local time of the place where the datetime information came from
- other clients can only calculate their own local time from the database (using the UTC time), not the local time of the client where the data originated
A simple example is a flight ticket reservation system. A flight ticket should contain two times:
- "take off" time (in the time zone of the "From" city)
- "landing" time (in the time zone of the "Destination" city)
You are absolutely correct to use UTC for all historical times (i.e. recording events that happened). It is always possible to go from UTC to local time, but not always the other way around.
When to use local time? Answer this question:
If the government suddenly decided to change daylight saving time, would you want this data to change with it?
Only store local time if the answer is "yes". Obviously that will only be for future dates, and usually only for dates that affect people in some way.
Why store a time zone/offset?
Firstly, if you want to record what the offset was for the user who carried out the action, you would probably be best just doing that, i.e. at login record the location and timezone for that user.
Secondly, if you want to convert for display, you need a table of all local-time offset transitions for that time zone; simply knowing the current offset is not enough, because if you are showing a date/time from six months ago, the offset will be different.
A DATETIMEOFFSET gives you the ability to store local time and UTC time in one field.
This allows for very simple and efficient reporting in local or UTC time without the need to process the data for display in any way.
These are the two most common requirements - local time for local reports and UTC time for group reports.
The local time is stored in the DATETIME portion of the DATETIMEOFFSET and the OFFSET from UTC is stored in the OFFSET portion, thus conversion is simple and, since it requires no knowledge of the timezone the data came from, can all be done at database level.
If you don't require times down to milliseconds, e.g. just to minutes or seconds, you can use DATETIMEOFFSET(0). The DATETIMEOFFSET field will then only require 8 bytes of storage - the same as a DATETIME.
Using a DATETIMEOFFSET rather than a UTC DATETIME therefore gives more flexibility, efficiency and simplicity for reporting.
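A minimal sketch of pulling both readings out of a single stored value:
DECLARE @dto DATETIMEOFFSET(0) = '2015-11-06 12:52:00 -07:00';
SELECT CAST(@dto AS DATETIME) AS LocalTime, -- 12:52:00, the local clock time as recorded
       CAST(SWITCHOFFSET(@dto, '+00:00') AS DATETIME) AS UtcTime; -- 19:52:00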

Calculate Daylight Savings Time (DST) on SQL/Database level

I am located in Sydney, Australia. The dates that I describe will be in UK or Australian date format.
Observe the following:
2010-04-15 04:30:00.000 => 15/04/2010 14:30:00 EST (UK date format - Add 10 hours)
2010-11-05 01:00:00.000 => 05/11/2010 12:00:00 EST (UK date format - Add 11 hours)
Both these times are retrieved from the database in UTC format, and whether +10 or +11 hours applies is then calculated at the Web level.
In Australia, daylight saving time (DST) transition dates vary year by year. The transition dates are usually in early April and late October.
So how accurate would the Web calculation be? If this year the transition date is a few days later (say 03/04/2010), but the Web calculation is based on a fixed date (say 01/04/2010), wouldn't that mean that the days in between will be off by 1 hour when displayed (due to the calculation being fixed to a specific day of the month)?
I believe the transition dates are not pre-determined and are actually announced to the public. Is that assumption true?
If not (that is, the DST dates are pre-determined), would I be able to do the calculation outside the Web level (at the SQL/database level)?
The database is SQL Server 2005, and I'm using Report Definition Language (RDL) to display the fields in UTC time. If the SQL/database level is not the best way, how do I work out whether +10 or +11 applies and format the time accordingly to show the right time?
Thank you.
The database is a bad choice for this: it has less information than C#/.NET to work it out. .NET uses the registry, which is kept up to date periodically by patches; SQL Server would have to have a table with date ranges and offsets.
The transitions are fixed in advance because of scheduling (flights, trains, whatever). IIRC it has only changed at short notice once recently in Australia, for some Olympic Games, and it caused chaos around the world. In 2007 the US changed, but this was known in advance.
By fixed, it's the "last Sunday of the month" type of fixed, even if the exact date varies year to year.
I would leave it in the web code: the DB does not know where your caller is, for example; the web site can work it out.
The problem is that whoever wrote this app does not quite understand UTC, its value, and how to use it. The database is the correct location for the data, but the system is not using UTC as intended.
If you use UTC, then all your date arithmetic should use UTC, in the database. Currently it saves UTC and then converts at some other layer (it doesn't matter if it's the second or third tier), with some other library: half UTC, and half something else. Have you considered historic dates, as in, what is the DATEDIFF() between 15 Feb 2010 and today?
Doing everything in UTC eliminates the concern re DST in Australia or Greenland, and the concern re what date/time the changeover actually happens; everyone is using Greenwich Mean Time for that particular day.
Do all your date arithmetic in the db, in UTC, and display the result (only) in the local time zone, which, as you have it, is the web layer, based on the user.
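For instance, a minimal sketch of the historic-date arithmetic mentioned above, with both operands on the UTC timeline:
-- Hours elapsed since 15 Feb 2010 00:00 UTC; DST transitions in any zone are irrelevant here
SELECT DATEDIFF(HOUR, '2010-02-15 00:00:00', GETUTCDATE()) AS HoursSince;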
Many systems have dropped that last step altogether, and display in UTC only, regardless of the user's time zone.
The database can handle DST for you. Use its time zone conversion functions to go from whatever zone you stored the dates in to whatever zone you want to show the user.
MySQL has CONVERT_TZ(); I don't know what other RDBMSs have.
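For example, a minimal MySQL sketch (assuming the server's time zone tables have been loaded, which named-zone conversion requires):
SELECT CONVERT_TZ('2010-11-05 01:00:00', 'UTC', 'Australia/Sydney');
-- 2010-11-05 12:00:00 -- the +11 summer offset is applied automatically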

Oracle Date field - Time issues

We have two databases, in two separate locations. One of the databases resides in a separate time zone than our users.
The problem is that when the database located in the other time zone is updated with a Date value, it automatically subtracts one hour from the Date it was passed.
The issue is that, when passing a date with no time portion (12:00:00 AM), the DAY value is then changed to the previous day.
The updates are done via stored procedures, and the front end is a VB.NET smartclient.
How would you handle this properly? I basically don't want to store the TIME at all, but I can't seem to figure out how to do that.
It's not clear what datetime you want in the database, or what the application is passing.
Assume the user's PC is telling them it is Tuesday, 12:30 am, and the clock on the DB server says Monday, 11:30 pm.
If you insert a value for the 'current date' (eg TRUNC(SYSDATE)) then, as far as the database is concerned, it is still Monday.
If you insert a value for the 'current time' (eg SYSDATE), it is also still Monday.
If you insert a value for the session's current time and time zone (eg CURRENT_TIMESTAMP) and ask the database to store it, it will store 11:30 pm.
If you ask the database to store the datetime '2009-12-31 14:00:00', then that is what it will store. If you ask it to store the datetime/timezone '2009-12-31 14:00:00 +08:00', then you are in the advanced manual: you can ask the database to store timestamps with time zone data. Also consider daylight saving.
I would investigate using the TRUNC function in the stored proc that updates the table. If the data type of the value being passed in is not a DATE type, then use the TO_DATE function in conjunction with TRUNC.
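A minimal Oracle sketch of that approach (table and column names are illustrative):
-- Strip the time portion so the stored value is always midnight
UPDATE call_log SET call_date = TRUNC(call_date);
-- If the incoming value is a string rather than a DATE, convert it first:
-- TRUNC(TO_DATE('2009-12-31 14:00:00', 'YYYY-MM-DD HH24:MI:SS'))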
This is outside the scope of the question you are asking, but I would recommend, in ALL cases where users access a database from different time zones, that the server and database clocks be set to UTC. It is probably too late for that here, but setting the database server to UTC eliminates the problems caused by daylight saving time and different time zones.
In my opinion, date/time data can and should always be stored in UTC, and converted to local time at the point where it is presented to the user. Oracle actually makes this easy with the TIMESTAMP WITH TIME ZONE data type. It allows you to access the data either as UTC (SYS_EXTRACT_UTC) or as local time (local to the database server).
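For example, a minimal sketch:
SELECT SYS_EXTRACT_UTC(TIMESTAMP '2009-12-31 14:00:00 -07:00') FROM dual;
-- 2009-12-31 21:00:00 -- the stored instant expressed as UTC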
It is never the same day everywhere in the world, so dates cannot be considered without time.
Of course, another of my opinions is that daylight saving time should be eliminated, but that is another topic.

Determine Time Zone Offset in T-SQL

My database application is going to be deployed at multiple sites in different time zones.
I need a T-SQL function that will determine the UTC timestamp of midnight on January 1 of the current year for YTD calculations. All of the data is stored in UTC timestamps.
For example, Chicago is UTC-6 with daylight saving time (DST); the function needs to return '2008-01-01 06:00:00' if run at any time in 2008 in Chicago. If run in New York (UTC-5, plus DST) next year, it needs to return '2009-01-01 05:00:00'.
I can get the current year from YEAR(GETDATE()). I thought I could do a DATEDIFF between GETDATE() and GETUTCDATE() to determine the offset, but the result depends on whether the query is run during DST or not. I do not know of any built-in T-SQL functions for determining the offset, or for determining whether the current time is in DST.
Does anyone have a solution to this problem in T-SQL? I could hard-code it or store it in a table, but would prefer not to. I suppose this is a perfect situation for using CLR integration in SQL Server 2005; I am just wondering if there is a T-SQL solution that I am unaware of.
Check out this previous question and answer for related information:
Effectively Converting dates between UTC and Local (ie. PST) time in SQL 2005
(To summarize: you do need to build time zone and DST tables in SQL Server 2005. In the next version of SQL Server we get some help with time zones.)
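A minimal sketch of the kind of table you would have to build and maintain by hand in SQL Server 2005 (all names are illustrative):
CREATE TABLE dbo.TimeZoneTransitions (
    ZoneName      VARCHAR(64) NOT NULL,
    UtcStart      DATETIME    NOT NULL, -- the UTC instant at which this offset takes effect
    OffsetMinutes INT         NOT NULL  -- e.g. -360 for CST, -300 for CDT
);
-- Conversion: take the row with the greatest UtcStart <= your UTC value for the zone,
-- then apply DATEADD(MINUTE, OffsetMinutes, YourUtcValue).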
Unless I'm mistaken, the GETUTCDATE() function uses the time zone defined on the server; it has no information regarding the client's time zone (or any time zone). I don't think that information is stored anywhere in SQL Server 2005, which makes it impossible for it to calculate this.
Maybe you could 'borrow' the data from Oracle's time zone file and build your own SQL Server function?
Off topic (could be useful to someone else) but if you were using Oracle, you could use the FROM_TZ function and 'AT TIME ZONE':
FROM_TZ(YOUR_TIMESTAMP, 'UTC') AT TIME ZONE 'America/Dawson_Creek'
Hmm, I guess I'm not understanding the problem. If the database app is already storing UTC timestamps for all of its transactions, and you want to sum up some values since the first of the year in "local time", your condition would have to be something like:
(timestamp + (getdate() - getutcdate())) > cast('01/01/2008' as datetime) -- shift the stored UTC value to local before comparing
The DST can be on or off depending on when in the year the query is run, but GETDATE() takes it into account, so you have to dynamically calculate the offset every time.
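For what it's worth, on SQL Server 2016 and later (not an option for the 2005 target of this question) the whole problem reduces to AT TIME ZONE; a minimal sketch for the Chicago example:
DECLARE @jan1_local DATETIMEOFFSET;
-- Midnight on January 1 of the current year, interpreted as Chicago local time
SET @jan1_local = CAST(DATEFROMPARTS(YEAR(SYSDATETIME()), 1, 1) AS DATETIME2)
                  AT TIME ZONE 'Central Standard Time';
SELECT CAST(SWITCHOFFSET(@jan1_local, '+00:00') AS DATETIME) AS Jan1Utc;
-- Run during 2008, this would return 2008-01-01 06:00:00 (Chicago is UTC-6 in January)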