Which one:
datetime
datetime2
is the recommended way to store date and time in SQL Server 2008+?
I'm aware of differences in precision (and storage space probably), but ignoring those for now, is there a best practice document on when to use what, or maybe we should just use datetime2 only?
The MSDN documentation for datetime recommends using datetime2. Here is their recommendation:
Use the time, date, datetime2 and datetimeoffset data types for new work. These types align with the SQL Standard. They are more portable. time, datetime2 and datetimeoffset provide more seconds precision. datetimeoffset provides time zone support for globally deployed applications.
datetime2 has a larger date range, a larger default fractional precision, and an optional user-specified precision. Also, depending on the user-specified precision, it may use less storage.
DATETIME2 has a date range of 0001-01-01 through 9999-12-31, while the DATETIME type only supports years 1753 through 9999.
Also, if you need it to be, DATETIME2 can be more precise in terms of time; DATETIME is limited to 3 1/3 milliseconds, while DATETIME2 can be accurate down to 100 ns.
Both types map to System.DateTime in .NET - no difference there.
If you have the choice, I would recommend using DATETIME2 whenever possible. I don't see any benefits using DATETIME (except for backward compatibility) - you'll have less trouble (with dates being out of range and hassle like that).
Plus: if you only need the date (without time part), use DATE - it's just as good as DATETIME2 and saves you space, too! :-) Same goes for time only - use TIME. That's what these types are there for!
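For example, here is a minimal sketch of picking the narrowest type for each purpose (the table and column names are hypothetical):
-- hypothetical table: use the narrowest type that fits each column
CREATE TABLE dbo.AppointmentDemo (
    AppointmentDate date         NOT NULL,  -- date only, 3 bytes
    StartTime       time(0)      NOT NULL,  -- time of day, whole seconds
    CreatedAtUtc    datetime2(3) NOT NULL   -- full timestamp, millisecond precision, 7 bytes
)
INSERT INTO dbo.AppointmentDemo (AppointmentDate, StartTime, CreatedAtUtc)
VALUES (CAST(SYSUTCDATETIME() AS date), CAST(SYSUTCDATETIME() AS time(0)), SYSUTCDATETIME())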
datetime2 wins in most respects, except for compatibility with old apps:
larger range of values
better accuracy
smaller storage space (if a lower, user-specified precision is specified)
Please note the following points:
Syntax
datetime2[(fractional seconds precision)] (see the storage sizes below)
Precision, scale
0 to 7 digits, with an accuracy of 100 ns.
The default precision is 7 digits.
Storage size
6 bytes for precision less than 3;
7 bytes for precision 3 and 4.
All other precisions require 8 bytes.
DateTime2(3) has the same number of digits as DateTime but uses 7 bytes of storage instead of 8 bytes (SQLHINTS - DateTime vs DateTime2)
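As a quick sanity check, here is a small sketch (run as-is) that uses DATALENGTH to show the 8-byte vs 7-byte difference:
DECLARE @dt  datetime     = GETDATE()
DECLARE @dt3 datetime2(3) = GETDATE()
-- datetime reports 8 bytes; datetime2(3) reports 7 bytes while displaying the same 3 fractional digits
SELECT DATALENGTH(@dt) AS datetime_bytes, DATALENGTH(@dt3) AS datetime2_3_bytes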
Find more in the datetime2 (Transact-SQL) MSDN article.
Image source: MCTS Self-Paced Training Kit (Exam 70-432): Microsoft® SQL Server® 2008 - Implementation and Maintenance, Chapter 3: Tables -> Lesson 1: Creating Tables -> page 66
I concur with @marc_s and @Adam_Poward -- DateTime2 is the preferred method moving forward. It has a wider range of dates, higher precision, and uses equal or less storage (depending on precision).
One thing the discussion missed, however...
@marc_s states: Both types map to System.DateTime in .NET - no difference there. This is correct; however, the inverse is not true, and it matters when doing date range searches (e.g. "find me all records modified on 5/5/2010").
.NET's version of DateTime has a similar range and precision to DateTime2. When mapping a .NET DateTime down to the old SQL DateTime, an implicit rounding occurs. The old SQL DateTime is only accurate to increments of 3 1/3 milliseconds. This means that 23:59:59.997 is as close as you can get to the end of the day; anything higher is rounded up to the following day.
Try this:
declare @d1 datetime = '5/5/2010 23:59:59.999'
declare @d2 datetime2 = '5/5/2010 23:59:59.999'
declare @d3 datetime = '5/5/2010 23:59:59.997'
select @d1 as 'IAmMay6BecauseOfRounding', @d2 as 'May5', @d3 as 'StillMay5Because2msEarlier'
Avoiding this implicit rounding is a significant reason to move to DateTime2. Implicit rounding of dates clearly causes confusion:
Strange datetime behavior in SQL Server
http://bytes.com/topic/sql-server/answers/578416-weird-millisecond-part-datetime-data-sql-server-2000-a
SQL Server 2008 and milliseconds
http://improve.dk/archive/2011/06/16/getting-bit-by-datetime-rounding-or-why-235959-999-ltgt.aspx
http://milesquaretech.com/Blog/post/2011/09/12/DateTime-vs-DateTime2-SQL-is-Rounding-My-999-Milliseconds!.aspx
Almost all the Answers and Comments have been heavy on the Pros and light on the Cons. Here's a recap of all Pros and Cons so far plus some crucial Cons (in #2 below) I've only seen mentioned once or not at all.
PROS:
1.1. More ISO compliant (ISO 8601) (although I don’t know how this comes into play in practice).
1.2. More range (1/1/0001 to 12/31/9999 vs. 1/1/1753-12/31/9999) (although the extra range, all prior to year 1753, will likely not be used except for ex., in historical, astronomical, geologic, etc. apps).
1.3. Exactly matches the range of .NET’s DateTime Type’s range (although both convert back and forth with no special coding if values are within the target type’s range and precision except for Con # 2.1 below else error / rounding will occur).
1.4. More precision (100 nanosecond aka 0.000,000,1 sec. vs. 3.33 millisecond aka 0.003,33 sec.) (although the extra precision will likely not be used except for ex., in engineering / scientific apps).
1.5. When configured for precision similar to DateTime (i.e. 1 millisecond, not the "same" 3.33-millisecond precision, as Iman Abidi has claimed), it uses less space (7 vs. 8 bytes); but then, of course, you'd be losing the precision benefit, which is likely one of the two most touted (the other being range) albeit likely unneeded benefits.
CONS:
2.1. When passing a Parameter to a .NET SqlCommand, you must specify System.Data.SqlDbType.DateTime2 if you may be passing a value outside the SQL Server DateTime’s range and/or precision, because it defaults to System.Data.SqlDbType.DateTime.
2.2. Cannot be implicitly / easily converted to a floating-point numeric (# of days since min date-time) value to do the following to / with it in SQL Server expressions using numeric values and operators:
2.2.1. add or subtract a # of days or partial days. Note: Using the DateAdd function as a workaround is not trivial when you need to consider multiple, if not all, parts of the date-time.
2.2.2. take the difference between two date-times for purposes of "age" calculation. Note: You cannot simply use SQL Server's DateDiff function instead, because it does not compute age as most people would expect: if the two date-times happen to cross a calendar / clock boundary of the unit specified, even by a tiny fraction of that unit, it'll return a difference of 1 of that unit vs. 0. For example, the DateDiff in days of two date-times only a fraction of a second apart will return 1 vs. 0 (days) if those date-times fall on different calendar days (e.g. "1999-12-31 23:59:59.9999999" and "2000-01-01 00:00:00.0000000"). The same two date-times, shifted so that they don't cross a calendar day, will return a DateDiff in days of 0 (see the sketch at the end of this answer).
2.2.3. take the Avg of date-times (in an Aggregate Query) by simply converting to “Float” first and then back again to DateTime.
NOTE: To convert DateTime2 to a numeric, you have to do something like the following formula, which still assumes your values are not less than the year 1970 (which means you're losing all of the extra range plus another 217 years). Note: You may not be able to simply adjust the formula to allow for extra range, because you may run into numeric overflow issues.
25567 + (DATEDIFF(SECOND, {d '1970-01-01'}, @Time) + DATEPART(nanosecond, @Time) / 1.0E+9) / 86400.0 - Source: "https://siderite.dev/blog/how-to-translate-t-sql-datetime2-to.html"
Of course, you could also cast to DateTime first (and, if necessary, back again to DateTime2), but you'd lose the precision and range (all dates prior to the year 1753) benefits of DateTime2 vs. DateTime, which are probably its two biggest and, at the same time, probably its two least likely needed benefits. That begs the question of why use it at all, when you lose the implicit / easy conversions to a floating-point numeric (# of days) for addition / subtraction / "age" (vs. DateDiff) / Avg calcs, which in my experience is a big benefit.
Btw, the Avg of date-times is (or at least should be) an important use case. a) Besides use in getting average duration when date-times (since a common base date-time) are used to represent duration (a common practice), b) it’s also useful to get a dashboard-type statistic on what the average date-time is in the date-time column of a range / group of Rows. c) A standard (or at least should be standard) ad-hoc Query to monitor / troubleshoot values in a Column that may not be valid ever / any longer and / or may need to be deprecated is to list for each value the occurrence count and (if available) the Min, Avg and Max date-time stamps associated with that value.
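To make the DateDiff boundary behavior described in 2.2.2 concrete, here is a small sketch (the values are chosen purely for illustration):
DECLARE @a datetime2(7) = '1999-12-31 23:59:59.9999999'
DECLARE @b datetime2(7) = '2000-01-01 00:00:00.0000000'
-- only 100 nanoseconds apart, but they straddle a calendar-day boundary
SELECT DATEDIFF(DAY, @a, @b) AS CrossesMidnight   -- returns 1

DECLARE @c datetime2(7) = '2000-01-01 12:00:00.0000000'
DECLARE @d datetime2(7) = '2000-01-01 12:00:00.0000001'
-- the same 100-nanosecond gap, but within a single calendar day
SELECT DATEDIFF(DAY, @c, @d) AS SameDay           -- returns 0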
Here is an example that will show you the differences in storage size (bytes) and precision between smalldatetime, datetime, datetime2(0), and datetime2(7):
DECLARE @temp TABLE (
sdt smalldatetime,
dt datetime,
dt20 datetime2(0),
dt27 datetime2(7)
)
INSERT @temp
SELECT getdate(),getdate(),getdate(),getdate()
SELECT sdt,DATALENGTH(sdt) as sdt_bytes,
dt,DATALENGTH(dt) as dt_bytes,
dt20,DATALENGTH(dt20) as dt20_bytes,
dt27, DATALENGTH(dt27) as dt27_bytes FROM @temp
which returns
sdt sdt_bytes dt dt_bytes dt20 dt20_bytes dt27 dt27_bytes
------------------- --------- ----------------------- -------- ------------------- ---------- --------------------------- ----------
2015-09-11 11:26:00 4 2015-09-11 11:25:42.417 8 2015-09-11 11:25:42 6 2015-09-11 11:25:42.4170000 8
So if I want to store information down to the second - but not to the millisecond - I can save 2 bytes each if I use datetime2(0) instead of datetime or datetime2(7).
DateTime2 wreaks havoc if you are an Access developer trying to write Now() to the field in question. Just did an Access -> SQL 2008 R2 migration and it put all the datetime fields in as DateTime2. Appending a record with Now() as the value bombed out. It was okay on 1/1/2012 2:53:04 PM, but not on 1/10/2012 2:53:04 PM.
One character made the difference. Hope it helps somebody.
Interpretation of date strings into datetime and datetime2 can be different too, when using non-US DATEFORMAT settings. E.g.
set dateformat dmy
declare @d datetime, @d2 datetime2
select @d = '2013-06-05', @d2 = '2013-06-05'
select @d, @d2
This returns 2013-05-06 (i.e. May 6) for datetime, and 2013-06-05 (i.e. June 5) for datetime2. However, with dateformat set to mdy, both @d and @d2 return 2013-06-05.
The datetime behavior seems at odds with the MSDN documentation of SET DATEFORMAT which states: Some character strings formats, for example ISO 8601, are interpreted independently of the DATEFORMAT setting. Obviously not true!
Until I was bitten by this, I'd always thought that yyyy-mm-dd dates would just be handled right, regardless of the language / locale settings.
Old Question... But I want to add something not already stated by anyone here... (Note: This is my own observation, so don't ask for any reference)
Datetime2 is faster when used in filter criteria.
TLDR:
In SQL 2016 I had a table with a hundred thousand rows and a datetime column ENTRY_TIME, because it was required to store the exact time down to seconds. While executing a complex query with many joins and a subquery, when I used a WHERE clause like:
WHERE ENTRY_TIME >= '2017-01-01 00:00:00' AND ENTRY_TIME < '2018-01-01 00:00:00'
The query was fine initially when there were hundreds of rows, but when the number of rows increased, the query started to give this error:
Execution Timeout Expired. The timeout period elapsed prior
to completion of the operation or the server is not responding.
I removed the WHERE clause and, unexpectedly, the query ran in 1 second, although now ALL rows for all dates were fetched. I ran the inner query with the WHERE clause and it took 85 seconds; without the WHERE clause it took 0.01 seconds.
I came across many threads here about this issue, i.e. datetime filtering performance.
I optimized the query a bit, but the real speed-up came from changing the datetime column to datetime2.
Now the same query that timed out previously takes less than a second.
cheers
While there is increased precision with datetime2, some clients don't support date, time, or datetime2 and force you to convert to a string literal. Specifically, Microsoft mentions "down level" ODBC, OLE DB, JDBC, and SqlClient issues with these data types and has a chart showing how each can map the type.
If you value compatibility over precision, use datetime.
According to this article, if you would like to have the same precision as DateTime while using DateTime2, you simply have to use DateTime2(3). This should give you the same precision, take up one fewer byte, and provide an expanded range.
I just stumbled across one more advantage for DATETIME2: it avoids a bug in the Python adodbapi module, which blows up if a standard library datetime value is passed which has non-zero microseconds for a DATETIME column but works fine if the column is defined as DATETIME2.
As the other answers show datetime2 is recommended due to smaller size and more precision, but here are some thoughts on why NOT to use datetime2 from Nikola Ilic:
lack of (simple) possibility to do basic math operations with dates, like GETDATE()+1
every time you are doing comparisons with DATEADD or DATEDIFF, you will end up with an implicit data conversion to datetime
SQL Server can't use statistics properly for datetime2 columns, due to the way the data is stored, which leads to non-optimal query plans that decrease performance
I think DATETIME2 is the better way to store dates, because it is more efficient than DATETIME. In SQL Server 2008 and later you can use DATETIME2; it stores a date and time, takes 6-8 bytes to store, and has a precision of 100 nanoseconds. So anyone who needs greater time precision will want DATETIME2.
The accepted answer is great; just know that if you are sending a DateTime2 to the frontend, it gets rounded to the normal DateTime equivalent.
This caused a problem for me because in a solution of mine I had to compare what was sent with what was in the database when it was resubmitted, and my simple '==' comparison didn't allow for rounding, so it had to be added.
datetime2 is better
datetime range: 1753-01-01 through 9999-12-31; datetime2 range: 0001-01-01 through 9999-12-31
datetime accuracy: 0.00333 second; datetime2 accuracy: 100 nanoseconds
datetime takes 8 bytes; datetime2 takes 6 to 8 bytes, depending on precision
(6 bytes for precision less than 3, 7 bytes for precision 3 or 4, all other precisions require 8 bytes)
Select ValidUntil + 1
from Documents
The above SQL won't work with a DateTime2 field.
It returns an error: "Operand type clash: datetime2 is incompatible with int"
Adding 1 to get the next day is something developers have been doing with dates for years. Now Microsoft have a super new datetime2 field that cannot handle this simple functionality.
"Let's use this new type that is worse than the old one", I don't think so!
A date is stored as a string in the database:
2021-12-15T14:18:22.6496978Z
I try to convert it to datetime:
CONVERT(DATETIME, '2021-12-15T14:18:22.6496978Z', 127)
using 127 which refers to yyyy-mm-ddThh:mi:ss.mmmZ, which I believe is the right style for the input date. But I am getting
Conversion failed when converting date and/or time from character string.
Any ideas why?
DATETIME and SMALLDATETIME are legacy types (as in replaced-15-years-ago, don't-use legacy) that have a lot of quirks and limited precision. For example, datetime values are rounded to increments of .000, .003, or .007 seconds. The value you tried to parse can't be converted to a datetime without losing precision.
The docs warn strongly against using this type, with a big pink warning at the top of the DATETIME page:
Use the time, date, datetime2 and datetimeoffset data types for new work. These types align with the SQL Standard. They are more portable. time, datetime2 and datetimeoffset provide more seconds precision. datetimeoffset provides time zone support for globally deployed applications.
In this case you need the datetime2 or datetimeoffset types introduced in SQL Server 2008. Both types allow specifying a precision.
To preserve the timezone offset, use datetimeoffset.
select CONVERT(datetimeoffset, '2021-12-15T14:18:22.6496978Z', 127)
----
2021-12-15 14:18:22.6496978 +00:00
To remove the offset, use datetime2. The result will have no assumed offset, so you should take care to always treat it as UTC:
select CONVERT(datetime2, '2021-12-15T14:18:22.6496978Z', 127)
----
2021-12-15 14:18:22.6496978
In both cases you can specify the desired precision. For example, datetime2(0) will round away the fractional seconds:
select CONVERT(datetime2(0), '2021-12-15T14:18:22.6496978Z', 127)
---
2021-12-15 14:18:23
I had a stored procedure comparing two dates. From the logic of my application, I expected them to be equal. However, the comparison failed. The reason for this was the fact that one of the values was stored as a DATETIME and had to be CONVERT-ed to a DATETIME2 before being compared to the other DATETIME2. Apparently, this changed its value. I have run this little test:
DECLARE @DateTime DATETIME='2018-01-18 16:12:25.113'
DECLARE @DateTime2 DATETIME2='2018-01-18 16:12:25.1130000'
SELECT @DateTime, @DateTime2, DATEDIFF(NANOSECOND, @DateTime, @DateTime2)
Which gave me a difference of 333333 nanoseconds between the two values.
Why is there the difference of 333333ns between these values? I thought that a DATETIME2, as a more precise type, should be able to accurately represent all the values which can be stored in a DATETIME? The documentation of DATETIME2 only says:
When the conversion is from datetime, the date and time are copied. The fractional precision is extended to 7 digits.
No warnings about the conversion adding or subtracting 333333ns to or from the value! So why does this happen?
I am using SQL Server 2016.
edit: Strangely, on a different server I am getting a zero difference. Both are SQL Server 2016 but the one where I have the problem has compatibility level set to 130, the one where I don't has it set to 120. Switching between them changes this behaviour.
edit2: DavidG suggested in the comments that the value I am using can be represented as a DATETIME2 but not as a DATETIME. So I have modified my test to make sure that the value I am assigning to @DateTime2 is a valid DATETIME value:
DECLARE @DateTime DATETIME='2018-01-18 16:12:25.113'
DECLARE @DateTime2 DATETIME2=CONVERT(DATETIME2, @DateTime)
SELECT @DateTime, @DateTime2, DATEDIFF(NANOSECOND, @DateTime, @DateTime2)
This helps a little because the difference is smaller, but it is still not zero.
A breaking change was introduced in SQL Server 2016 with regards to conversion and comparison of datetime and datetime2. The changes are detailed in this knowledge base article.
In summary, values were rounded during the conversion in SQL 2014 and earlier versions whereas the full precision is considered nowadays. This improves performance but introduces issues when converting and comparing these unlike types.
datetime2 is shorthand for datetime2(7), which indicates you want 7 digits for fractional seconds (the maximum). Try a datetime2(3) if you want something closer to a datetime.
Also, be aware that datetime2(3) is more precise than a datetime. The latter rounds to the nearest 0.000, 0.003, or 0.007 by design.
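A quick sketch of that rounding, which you can run as-is:
DECLARE @v varchar(23) = '2018-01-18 16:12:25.999'
-- datetime rounds .999 up to the next .000 / .003 / .007 increment; datetime2(3) keeps .999
SELECT CAST(@v AS datetime)     AS as_datetime,     -- 2018-01-18 16:12:26.000
       CAST(@v AS datetime2(3)) AS as_datetime2_3   -- 2018-01-18 16:12:25.999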
Based on this MSDN blog post, DATETIME precision is .00333 seconds, while DATETIME2 (or DATETIME2(7) explicitly) has 100 ns precision. So even comparing DATETIME to DATETIME2(3), which would seem to have the same precision, DATETIME2(3) is more precise.
This weird 3.33 ms precision of DATETIME is the reason why, when comparing seemingly equal values, you can get a difference.
To me, when you do the comparison, you should convert the higher-precision value down to the lower precision to avoid such a "difference":
DECLARE @DateTime DATETIME='2018-01-18 16:12:25.113'
DECLARE @DateTime2 DATETIME2='2018-01-18 16:12:25.1130000'
SELECT @DateTime, cast(@DateTime2 as datetime), DATEDIFF(NANOSECOND, @DateTime, cast(@DateTime2 as datetime))
The result is a difference of zero.
What are the implications of using SQL Server's DateTime2 with a precision of 0 to represent a date, rather than the built-in Date type?
In either case, my concern is to prevent accidental time entries, but are there storage or performance considerations I should take note of?
DateTime2(0) will store the datetime with no decimal places, i.e. YYYY-MM-DD hh:mm:ss
SELECT CONVERT(DateTime2(0) , GETDATE())
RESULT: 2015-04-06 20:47:17
Storing the data just as dates will only store the date, i.e. YYYY-MM-DD, without any time values.
SELECT CONVERT(Date , GETDATE())
RESULT: 2015-04-06
If you are only interested in dates then use DATE data type.
DATETIME2 will use 6 bytes for precisions less than 3 and DATE will use 3 bytes.
DATE is half the size of DATETIME2(0), so it will also perform better, since SQL Server will process less data, and it will save disk space as well.
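A minimal sketch showing the size difference, using DATALENGTH:
DECLARE @d  date         = GETDATE()
DECLARE @d2 datetime2(0) = GETDATE()
-- date is 3 bytes; datetime2(0) is 6 bytes
SELECT DATALENGTH(@d) AS date_bytes, DATALENGTH(@d2) AS datetime2_0_bytes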
It won't work. According to MSDN, the minimum size of Datetime2 is six bytes and it will contain hh:mm:ss, so it can, and will, contain a time component (defaulting to midnight). As other responders have noted, you must use the date type to guarantee that no time portion is saved; it will occupy three bytes.
https://technet.microsoft.com/en-us/library/bb677335%28v=sql.105%29.aspx
Just a reminder of something I ran into myself when I converted a couple of DATETIME2(0) columns to DATE to make sure the type aligned better with the values in the columns (date only).
When using DATE you cannot use things like SELECT MyDate + 1 FROM ... or WHERE MyDate > 0, at least not in MS SQL, while when using DATETIME you can. Of course, IMHO it doesn't make much sense that DATETIME allows you to do this and DATE does not.
I want to save the date and time of various activities performed by the user. For the date I have decided to use a DateTime column in the database, and for the time I am in a dilemma about which datatype to go for.
I know that in SQL Server 2008 the Time datatype was introduced, but I am using an older version, i.e. SQL Server 2005, so I need your suggestions to prove my understanding true or false.
I have seen people using varchar or DateTime for storing time in a database. But I am looking towards using an integer datatype.
The reason for my selection is performance.
Following is the justification that I am giving to myself.
Assumptions
Any data saved into the database must follow these rules:
Date will be stored in the format mm/dd/yyyy hh:MM:ss, where hh:MM:ss will always be 00:00:00
Time will be stored in a valid format (hh:MM:ss stored as the integer hhMMss);
if hh is 00, it is stored as MMss,
and if MM is also 00, it is stored as ss,
and if ss is also 00, it is stored as 0
hh will range in between 0-23
MM will range in between 0-59
ss will range in between 0-59
i.e. a few examples:
00:00:00 = 0
00:01:00 = 100
01:00:00 = 10000
13:00:00 = 130000
Personal thoughts on why it will perform better:
SELECT * FROM Log WHERE loginDate = '05/23/2011'
AND loginTime BETWEEN 0 AND 235959 --Integer Comparison
When using JOINs on the basis of DateTime, considering the join for the date part only, i.e. joining two tables on the basis of common dates irrespective of time: I think type conversion would have a heavy impact in such cases if DateTime were used as the storage datatype.
Since SQL will only have to do an integer comparison and no typecasting will be required, it should perform better.
EDIT
One drawback I just identified: when I want to get the difference between two times, i.e. how much time has been spent across, say, 3 days, it would become a nightmare to manage throughout the application.
So why do you need 2 columns? If the DateTime column (loginDate) has an empty time of 00:00:00, why not just use that empty space for loginTime and have one column?
WHERE loginDate >= '05/23/2011' AND loginDate < '05/24/2011'
If you're intent on using an integer, there's nothing wrong with it.
Bearing your edit in mind, your ideal solution is to put both date and time in the same column, a DATETIME:
You can then trivially figure the difference between start and end times with DATEDIFF
You can easily establish just the date portion with CONVERT(varchar(10), loginDate, 101)
You can easily establish just the time portion with CONVERT(varchar(10), loginDate, 108)
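For example, a sketch against the Log table from the question (logoutDate is a hypothetical second column, added here only for the elapsed-time example):
SELECT DATEDIFF(SECOND, loginDate, logoutDate)  AS SessionSeconds, -- elapsed time between two datetimes
       CONVERT(varchar(10), loginDate, 101)     AS DateOnly,       -- mm/dd/yyyy
       CONVERT(varchar(10), loginDate, 108)     AS TimeOnly        -- hh:mm:ss
FROM Log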
Storage issues might be resolved by using SMALLDATETIME, if precision finer than 1 minute isn't required. SMALLDATETIME requires four bytes per column, which is the same as INTEGER, so you're making a significant net gain over using two columns.
I have a datetime column in the db and when I test setting it
DateTime dateTime = DateTime.Now;
state.LastUpdated = dateTime;
Assert.AreEqual(dateTime , state.LastUpdated);
I get the following error
Assert.AreEqual failed. Expected:<3/2/2011 9:52:32 AM>. Actual:<3/2/2011 9:52:00 AM>.
What's the granularity of SQL datetime and is it possible to tune it for more granularity?
SQL Server's datetime values are rounded to increments of .000, .003, or .007 seconds (http://msdn.microsoft.com/en-us/library/ms187819.aspx). You can't tune it for more granularity.
.NET's DateTime is much more granular: it is precise down to ticks, which are smaller than milliseconds. You need to take this into account when asserting in your test.
If you need more precision, you could always use a bigint in Sql Server instead of a DateTime, and store the number of ticks. (DateTime has a constructor that accepts an Int64 number of ticks.)
Use datetime2, which has up to 100 nanoseconds precision. msdn.microsoft.com/en-us/library/bb677335.aspx
Noted by @Remus in a comment, posting it as an answer for posterity.
SQL datetime can represent a date down to fractions of a second, and, according to http://msdn.microsoft.com/en-us/library/ms187819.aspx, datetime values are rounded to increments of .000, .003, or .007 seconds.