SQL query for cumulative frequency of a list of datetimes

I have a list of times in a database column (representing visits to a website).
I need to group them in intervals and then get a 'cumulative frequency' table of those dates.
For instance I might have:
9:01
9:04
9:11
9:13
9:22
9:24
9:28
and I want to convert that into
9:05 - 2
9:15 - 4
9:25 - 6
9:30 - 7
How can I do that? Can I even easily achieve this in SQL? I can do it quite easily in C#.

create table accu_times (time_val datetime not null, constraint pk_accu_times primary key (time_val));
go
insert into accu_times values ('9:01');
insert into accu_times values ('9:05');
insert into accu_times values ('9:11');
insert into accu_times values ('9:13');
insert into accu_times values ('9:22');
insert into accu_times values ('9:24');
insert into accu_times values ('9:28');
go
select rounded_time,
    (
        select count(*)
        from accu_times as at2
        where at2.time_val <= rt.rounded_time
    ) as accu_count
from (
    select distinct
        dateadd(minute, round((datepart(minute, at.time_val) + 2)*2, -1)/2,
            dateadd(hour, datepart(hour, at.time_val), 0)
        ) as rounded_time
    from accu_times as at
) as rt
go
drop table accu_times
Results in:
rounded_time accu_count
----------------------- -----------
1900-01-01 09:05:00.000 2
1900-01-01 09:15:00.000 4
1900-01-01 09:25:00.000 6
1900-01-01 09:30:00.000 7

I should point out that, based on the stated "intent" of the problem (to do analysis on visitor traffic), I wrote this statement to summarize the counts in uniform groups.
To do otherwise (as in the "example" groups) would be comparing the counts during a 5-minute interval to the counts during a 10-minute interval, which doesn't make sense.
You have to grok the "intent" of the user requirement, not the literal "reading" of it. :-)
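For what it's worth, on SQL Server 2012 or later the same uniform grouping plus the running total can be written with a window sum; a minimal sketch against the accu_times table above:
-- Sketch only: needs SQL Server 2012+ for SUM() OVER with ORDER BY and a frame.
-- Buckets time_val into 5-minute groups (labelled by the bucket start, not the
-- interval end) and accumulates the per-bucket counts into a running total.
select dateadd(minute, (datediff(minute, 0, time_val) / 5) * 5, 0) as bucket_start,
    count(*) as bucket_count,
    sum(count(*)) over (order by datediff(minute, 0, time_val) / 5
                        rows unbounded preceding) as accu_count
from accu_times
group by datediff(minute, 0, time_val) / 5
order by bucket_start;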
create table #myDates
(
myDate datetime
);
go
insert into #myDates values ('10/02/2008 09:01:23');
insert into #myDates values ('10/02/2008 09:03:23');
insert into #myDates values ('10/02/2008 09:05:23');
insert into #myDates values ('10/02/2008 09:07:23');
insert into #myDates values ('10/02/2008 09:11:23');
insert into #myDates values ('10/02/2008 09:14:23');
insert into #myDates values ('10/02/2008 09:19:23');
insert into #myDates values ('10/02/2008 09:21:23');
insert into #myDates values ('10/02/2008 09:21:23');
insert into #myDates values ('10/02/2008 09:21:23');
insert into #myDates values ('10/02/2008 09:21:23');
insert into #myDates values ('10/02/2008 09:21:23');
insert into #myDates values ('10/02/2008 09:26:23');
insert into #myDates values ('10/02/2008 09:27:23');
insert into #myDates values ('10/02/2008 09:29:23');
go
declare @interval int;
set @interval = 10;
select
    convert(varchar(5), dateadd(minute, @interval - datepart(minute, myDate) % @interval, myDate), 108) as timeGroup,
    count(*)
from
    #myDates
group by
    convert(varchar(5), dateadd(minute, @interval - datepart(minute, myDate) % @interval, myDate), 108)
returns:
timeGroup
--------- -----------
09:10 4
09:20 3
09:30 8

Ooh, way too complicated, all of that stuff.
Normalise to seconds, divide by your bucket interval, truncate and re-multiply:
select sec_to_time(floor(time_to_sec(d)/300)*300), count(*)
from d
group by sec_to_time(floor(time_to_sec(d)/300)*300)
Using Ron Savage's data, I get
+----------+----------+
| i | count(*) |
+----------+----------+
| 09:00:00 | 1 |
| 09:05:00 | 3 |
| 09:10:00 | 1 |
| 09:15:00 | 1 |
| 09:20:00 | 6 |
| 09:25:00 | 2 |
| 09:30:00 | 1 |
+----------+----------+
You may wish to use ceil() or round() instead of floor().
Update: for a table created with
create table d (
d datetime
);
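To get the cumulative figures the question asked for rather than per-bucket counts, a window sum over the grouped counts works as well; a sketch assuming MySQL 8.0+ (for window functions) and the same table d:
-- Sketch only: same 300-second (5-minute) buckets as above, plus a running total.
select sec_to_time(floor(time_to_sec(d)/300)*300) as bucket,
    count(*) as bucket_count,
    sum(count(*)) over (order by sec_to_time(floor(time_to_sec(d)/300)*300)) as accu_count
from d
group by sec_to_time(floor(time_to_sec(d)/300)*300)
order by bucket;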

Create a table periods describing the periods you wish to divide the day up into.
SELECT periods.name, count(times.time)
FROM periods, times
WHERE periods.[start] <= times.time
  AND times.time < periods.[end]
GROUP BY periods.name
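The periods and times tables are left implied; a minimal T-SQL sketch of what they might look like (only the table and column names come from the query above, everything else is an assumption, and [end] is delimited because END is a reserved word):
create table times (
    [time] datetime not null    -- the raw visit timestamps
);
create table periods (
    name    varchar(20) not null,
    [start] datetime    not null,
    [end]   datetime    not null
);
-- Cumulative-style periods matching the question's example: every period
-- starts at midnight, so each count covers everything up to its end time.
insert into periods values ('09:05', '1900-01-01 00:00', '1900-01-01 09:05');
insert into periods values ('09:15', '1900-01-01 00:00', '1900-01-01 09:15');
insert into periods values ('09:25', '1900-01-01 00:00', '1900-01-01 09:25');
insert into periods values ('09:30', '1900-01-01 00:00', '1900-01-01 09:30');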

Create a table containing the intervals you want totals at, then join the two tables together.
Such as:
time_entry.time_entry
-----------------------
2008-10-02 09:01:00.000
2008-10-02 09:04:00.000
2008-10-02 09:11:00.000
2008-10-02 09:13:00.000
2008-10-02 09:22:00.000
2008-10-02 09:24:00.000
2008-10-02 09:28:00.000
time_interval.time_end
-----------------------
2008-10-02 09:05:00.000
2008-10-02 09:15:00.000
2008-10-02 09:25:00.000
2008-10-02 09:30:00.000
SELECT
ti.time_end,
COUNT(*) AS 'interval_total'
FROM time_interval ti
INNER JOIN time_entry te
ON te.time_entry < ti.time_end
GROUP BY ti.time_end;
time_end interval_total
----------------------- -------------
2008-10-02 09:05:00.000 2
2008-10-02 09:15:00.000 4
2008-10-02 09:25:00.000 6
2008-10-02 09:30:00.000 7
If instead of wanting cumulative totals you wanted totals within a range, then you add a time_start column to the time_interval table and change the query to
SELECT
ti.time_end,
COUNT(*) AS 'interval_total'
FROM time_interval ti
INNER JOIN time_entry te
ON te.time_entry >= ti.time_start
AND te.time_entry < ti.time_end
GROUP BY ti.time_end;
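The table definitions aren't shown; a minimal sketch that makes the two queries above runnable (the table and column names come from the queries, the types and boundary rows are assumptions):
create table time_entry (
    time_entry datetime not null
);
create table time_interval (
    time_start datetime null,    -- only needed for the per-range variant
    time_end   datetime not null
);
insert into time_entry values ('2008-10-02 09:01');
insert into time_entry values ('2008-10-02 09:04');
insert into time_entry values ('2008-10-02 09:11');
insert into time_entry values ('2008-10-02 09:13');
insert into time_entry values ('2008-10-02 09:22');
insert into time_entry values ('2008-10-02 09:24');
insert into time_entry values ('2008-10-02 09:28');
insert into time_interval (time_start, time_end) values ('2008-10-02 09:00', '2008-10-02 09:05');
insert into time_interval (time_start, time_end) values ('2008-10-02 09:05', '2008-10-02 09:15');
insert into time_interval (time_start, time_end) values ('2008-10-02 09:15', '2008-10-02 09:25');
insert into time_interval (time_start, time_end) values ('2008-10-02 09:25', '2008-10-02 09:30');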

This uses quite a few SQL tricks (SQL Server 2005):
CREATE TABLE [dbo].[stackoverflow_165571](
[visit] [datetime] NOT NULL
) ON [PRIMARY]
GO
;WITH buckets AS (
SELECT dateadd(mi, (1 + datediff(mi, 0, visit - 1 - dateadd(dd, 0, datediff(dd, 0, visit))) / 5) * 5, 0) AS visit_bucket
,COUNT(*) AS visit_count
FROM stackoverflow_165571
GROUP BY dateadd(mi, (1 + datediff(mi, 0, visit - 1 - dateadd(dd, 0, datediff(dd, 0, visit))) / 5) * 5, 0)
)
SELECT LEFT(CONVERT(varchar, l.visit_bucket, 8), 5) + ' - ' + CONVERT(varchar, SUM(r.visit_count))
FROM buckets l
LEFT JOIN buckets r
ON r.visit_bucket <= l.visit_bucket
GROUP BY l.visit_bucket
ORDER BY l.visit_bucket
Note that it puts all the times on the same day, and assumes they are in a datetime column. The only thing it doesn't do as your example does is strip the leading zeroes from the time representation.
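If the leading zero matters, one hedged way to strip it from the converted 'hh:mi' string:
-- Sketch: drop a single leading zero, so '09:05' becomes '9:05'.
declare @t varchar(5);
set @t = '09:05';
select case when left(@t, 1) = '0' then stuff(@t, 1, 1, '') else @t end as trimmed;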

Related

get sales records totaling more than $1000 in any 3 hour timespan?

I'm asking because I'm not sure what to google for; the attempts that seemed obvious to me returned nothing useful.
I have sales of objects coming into the database at particular datetimes with particular $ values. I want to get all groups of sales records that a) fall within any 3-hour time frame (not just fixed blocks like 1am-4am), and b) total >= $1000.
The table looks like:
Sales
SaleId int primary key
Item varchar
SaleAmount money
SaleDate datetime
Even just a suggestion on what I should be googling for would be appreciated lol!
EDIT:
OK, after trying the cross apply solution: it's close but not quite there. To illustrate, consider the following sample data:
-- table & data script
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Sales](
[pkid] [int] IDENTITY(1,1) NOT NULL,
[item] [int] NULL,
[amount] [money] NULL,
[saledate] [datetime] NULL,
CONSTRAINT [PK_Sales] PRIMARY KEY CLUSTERED
(
[pkid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
INSERT [dbo].[Sales] VALUES (1, 649.3800, CAST(N'2017-12-31T21:46:19.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (1, 830.6700, CAST(N'2018-01-01T08:38:58.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (1, 321.0400, CAST(N'2018-01-01T09:08:04.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (3, 762.0300, CAST(N'2018-01-01T07:26:30.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (2, 733.5100, CAST(N'2017-12-31T12:04:07.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (3, 854.5700, CAST(N'2018-01-01T08:32:11.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (2, 644.1700, CAST(N'2017-12-31T17:49:59.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (1, 304.7700, CAST(N'2018-01-01T08:01:50.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (2, 415.1200, CAST(N'2017-12-31T20:27:28.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (3, 698.1700, CAST(N'2018-01-01T02:39:28.000' AS DateTime))
A simple adaptation of the cross apply solution from the comments, to go item by item:
select s.*
, s2.saleamount_sum
from Sales s cross apply
(select sum(s_in.amount) as saleamount_sum
from Sales s_in
where s.item = s_in.item
and s.saledate >= s_in.saledate and s_in.saledate < dateadd(hour, 3, s.saledate)
) s2
where s2.saleamount_sum > 1000
order by s.item, s.saledate
So the actual data (sorted by item/time) looks like:
pkid item amount saledate
1 1 649.38 2017-12-31 21:46:19.000
8 1 304.77 2018-01-01 08:01:50.000
2 1 830.67 2018-01-01 08:38:58.000
3 1 321.04 2018-01-01 09:08:04.000
5 2 733.51 2017-12-31 12:04:07.000
7 2 644.17 2017-12-31 17:49:59.000
9 2 415.12 2017-12-31 20:27:28.000
10 3 698.17 2018-01-01 02:39:28.000
4 3 762.03 2018-01-01 07:26:30.000
6 3 854.57 2018-01-01 08:32:11.000
and the result of the cross apply method:
pkid item amount saledate saleamount_sum
2 1 830.67 1/1/18 8:38 AM 1784.82
3 1 321.04 1/1/18 9:08 AM 2105.86
7 2 644.17 12/31/17 5:49 PM 1377.68
9 2 415.12 12/31/17 8:27 PM 1792.8
4 3 762.03 1/1/18 7:26 AM 1460.2
6 3 854.57 1/1/18 8:32 AM 2314.77
The issue can be seen by considering the method's analysis of Item 1. From the data, we see that the FIRST sale of item 1 does not participate in any 3-hour-over-$1000 group. The second, third, and fourth Item 1 sales, however, do participate, and they are correctly picked out (pkid = 2 and 3). But their sums aren't right: both of their sums include the very FIRST sale of Item 1, which does not satisfy the timespan/amount condition. I would have expected the saleamount_sum for pkid 2 to be 1135.44, and for pkid 3 to be 1456.48 (their reported sums minus the first non-participating sale).
Hopefully that makes sense. I'll try fiddling with the cross apply query to get it. Anyone who can quickly see how to get what I'm after, please feel free to chime in.
thanks,
-sff
Here is one method using apply:
select t.*, tt.saleamount_sum
from t cross apply
(select sum(t2.saleamount) as saleamount_sum
from t t2
where t2.saledate >= t.saledate and t2.saledate < dateadd(hour, 3, t.saledate)
) tt
where tt.saleamount_sum > 1000;
Edit:
If you want this per item (which is not specified in the question), then you need a condition to that effect:
select t.*, tt.saleamount_sum
from t cross apply
(select sum(t2.saleamount) as saleamount_sum
from t t2
where t2.item = t.item and t2.saledate >= t.saledate and t2.saledate < dateadd(hour, 3, t.saledate)
) tt
where tt.saleamount_sum > 1000;
Your query had one comparison backwards: s.saledate >= s_in.saledate instead of s_in.saledate >= s.saledate. The inner query below looks at the next 3 hours for each row of the outer query.
Sample data
DECLARE @Sales TABLE (
[pkid] [int] IDENTITY(1,1) NOT NULL,
[item] [int] NULL,
[amount] [money] NULL,
[saledate] [datetime] NULL
);
INSERT INTO @Sales VALUES (1, 649.3800, CAST(N'2017-12-31T21:46:19.000' AS DateTime))
INSERT INTO @Sales VALUES (1, 830.6700, CAST(N'2018-01-01T08:38:58.000' AS DateTime))
INSERT INTO @Sales VALUES (1, 321.0400, CAST(N'2018-01-01T09:08:04.000' AS DateTime))
INSERT INTO @Sales VALUES (3, 762.0300, CAST(N'2018-01-01T07:26:30.000' AS DateTime))
INSERT INTO @Sales VALUES (2, 733.5100, CAST(N'2017-12-31T12:04:07.000' AS DateTime))
INSERT INTO @Sales VALUES (3, 854.5700, CAST(N'2018-01-01T08:32:11.000' AS DateTime))
INSERT INTO @Sales VALUES (2, 644.1700, CAST(N'2017-12-31T17:49:59.000' AS DateTime))
INSERT INTO @Sales VALUES (1, 304.7700, CAST(N'2018-01-01T08:01:50.000' AS DateTime))
INSERT INTO @Sales VALUES (2, 415.1200, CAST(N'2017-12-31T20:27:28.000' AS DateTime))
INSERT INTO @Sales VALUES (3, 698.1700, CAST(N'2018-01-01T02:39:28.000' AS DateTime))
INSERT INTO @Sales VALUES (4, 600, CAST(N'2018-01-01T02:39:01.000' AS DateTime))
INSERT INTO @Sales VALUES (4, 600, CAST(N'2018-01-01T02:39:02.000' AS DateTime))
INSERT INTO @Sales VALUES (4, 600, CAST(N'2018-01-01T02:39:03.000' AS DateTime))
INSERT INTO @Sales VALUES (4, 600, CAST(N'2018-01-01T02:39:04.000' AS DateTime))
INSERT INTO @Sales VALUES (4, 600, CAST(N'2018-01-01T02:39:05.000' AS DateTime))
INSERT INTO @Sales VALUES (4, 600, CAST(N'2018-01-01T02:39:06.000' AS DateTime))
Query
select
s.*
, s2.saleamount_sum
from
@Sales AS s
cross apply
(
select sum(s_in.amount) as saleamount_sum
from @Sales AS s_in
where
s.item = s_in.item
and s_in.saledate >= s.saledate
and s_in.saledate < dateadd(hour, 3, s.saledate)
) AS s2
where s2.saleamount_sum > 1000
order by s.item, s.saledate
;
Result
+------+------+--------+-------------------------+----------------+
| pkid | item | amount | saledate | saleamount_sum |
+------+------+--------+-------------------------+----------------+
| 8 | 1 | 304.77 | 2018-01-01 08:01:50.000 | 1456.48 |
| 2 | 1 | 830.67 | 2018-01-01 08:38:58.000 | 1151.71 |
| 7 | 2 | 644.17 | 2017-12-31 17:49:59.000 | 1059.29 |
| 4 | 3 | 762.03 | 2018-01-01 07:26:30.000 | 1616.60 |
| 11 | 4 | 600.00 | 2018-01-01 02:39:01.000 | 3600.00 |
| 12 | 4 | 600.00 | 2018-01-01 02:39:02.000 | 3000.00 |
| 13 | 4 | 600.00 | 2018-01-01 02:39:03.000 | 2400.00 |
| 14 | 4 | 600.00 | 2018-01-01 02:39:04.000 | 1800.00 |
| 15 | 4 | 600.00 | 2018-01-01 02:39:05.000 | 1200.00 |
+------+------+--------+-------------------------+----------------+
I added 6 rows with item=4 to the sample data. They all fall within 3 hours of each other, so five of the six rows start a 3-hour window whose sum is larger than 1000. Technically this result is correct, but is this really the kind of result you want?
To get all sales within a specified range of hours:
SELECT SaleId, sum(SaleAmount) as amount FROM Sales WHERE (HOUR(SaleDate) BETWEEN 1 AND 4) GROUP BY SaleId HAVING amount >=1000;
You can add other conditions in the WHERE clause.
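Note that HOUR() and referencing the amount alias in HAVING are MySQL syntax; a rough T-SQL sketch of the same idea (the aggregate is repeated in HAVING because T-SQL cannot reference the alias there):
SELECT SaleId, SUM(SaleAmount) AS amount
FROM Sales
WHERE DATEPART(hour, SaleDate) BETWEEN 1 AND 4
GROUP BY SaleId
HAVING SUM(SaleAmount) >= 1000;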
If you're looking for periods like 0:00-3:00, 3:00-6:00, you can group by those intervals. The following query rounds the hour to multiples of three, combines it with the date, and groups on that:
select format(dt, 'yyyy-MM-dd') + ' ' +
    cast(datepart(hour, dt) / 3 * 3 as varchar(2)) as period
    , sum(amount) as total
from sales
group by
    format(dt, 'yyyy-MM-dd') + ' ' +
    cast(datepart(hour, dt) / 3 * 3 as varchar(2))
having sum(amount) > 1000
Working example at rextester.
If you're looking for any 3 hour period, like 0:33-3:33 or 12:01-15:01, see Gordon Linoff's answer.

Split project date range into rows of work weeks for all projects in SQL

I have a projects table with a total_hours column and startdate and enddate columns.
If a project's date range spans 5 weeks, I need a query that returns 5 rows, with the incremented work week number in a calculated field, for every project.
Here is my table data, with a query showing the range in work-week format.
drop table #temp
CREATE TABLE #Temp
(ProjectID int, Total_Hours int, StartDate datetime, EndDate datetime)
;
INSERT INTO #Temp
(ProjectID, Total_Hours, StartDate, EndDate)
VALUES
(645, 555, '2016-01-01 00:00:00', '2016-02-01 00:00:00'),
(700, 234, '2015-01-14 00:00:00', '2016-02-01 00:00:00')
Select datepart(week,startdate),datepart(week,Enddate) from #Temp
I need a query that will return the following values
ProjectID WW
645 1
645 2
645 3
645 4
645 5
645 6
700 3
700 4
700 5
700 6
I feel I should use recursion but don't know how.
You could do it with recursion, but a numbers table is generally more efficient:
with n as (
    select row_number() over (order by (select null)) - 1 as n
    from master..spt_values
)
select t.projectid, dateadd(week, n.n, t.startdate) as ww
from #Temp t join
n
on dateadd(week, n.n, t.startdate) <= t.enddate;
If you prefer a recursive query, use
with t as (
select projectid,datepart(week,startdate) sw,datepart(week,enddate) ew from #Temp
union all
select projectid,sw+1,ew from t where sw < ew
)
select projectid, sw
from t
order by 1,2

SQL Server 2012: GROUP BY seconds, minutes, hours

I am trying to group records in SQL Server 2012 by DateTime. I found an example on Stack Overflow that does part of what I want, but it does not group correctly once the interval exceeds an hour. If, for example, I group on minutes in blocks of 30 minutes, the result is correct (see the query and result below). But if I group in blocks of 120 minutes, I get the exact same result: it keeps grouping at a maximum of 60 minutes within each hour (result below).
The problem is that the grouping cannot roll over into the parent DateTime part (seconds to minutes, minutes to hours, even seconds to hours, ...). That is somewhat logical, since I only look at the minutes, but I would like it to carry over into the hours as well. Maybe with DATEADD(), but I can't manage to get it working.
Any ideas?
A (small) example to show what I mean:
Query:
DECLARE @TimeInterval int, @StartTime DateTime, @EndTime Datetime
SET @TimeInterval = 30
SET @StartTime = '2015-01-01T08:00:00Z'
SET @EndTime = '2015-01-05T10:00:00Z'
declare @T table
(
Value datetime
);
insert into @T values ('2015-01-01T08:00:00');
insert into @T values ('2015-01-01T08:03:00');
insert into @T values ('2015-01-01T08:06:00');
insert into @T values ('2015-01-01T08:14:00');
insert into @T values ('2015-01-01T09:06:00');
insert into @T values ('2015-01-01T09:07:00');
insert into @T values ('2015-01-01T09:08:00');
insert into @T values ('2015-01-01T11:09:00');
insert into @T values ('2015-01-01T12:10:00');
insert into @T values ('2015-01-01T13:11:00');
insert into @T values ('2015-01-02T08:08:00');
insert into @T values ('2015-01-02T08:09:00');
insert into @T values ('2015-01-03T08:10:00');
insert into @T values ('2015-01-04T08:11:00');
SELECT DATEADD(MINUTE, @TimeInterval, Convert(Datetime, CONCAT(DATEPART(YEAR, Value), '-', DATEPART(MONTH, Value),
        '-', DATEPART(DAY, Value), ' ', DATEPART(HOUR, Value), ':', ((DATEPART(MINUTE, Value) / @TimeInterval) * @TimeInterval), ':00'), 120)) as Period,
    ISNULL(COUNT(*), 0) AS NumberOfVisitors
FROM @T
WHERE Value >= @StartTime AND Value < @EndTime
GROUP BY Convert(Datetime, CONCAT(DATEPART(YEAR, Value), '-', DATEPART(MONTH, Value), '-', DATEPART(DAY, Value), ' ',
    DATEPART(HOUR, Value), ':', ((DATEPART(MINUTE, Value) / @TimeInterval) * @TimeInterval), ':00'), 120)
ORDER BY Period
Result for 30 min
2015-01-01 08:30:00.000 | 4
2015-01-01 09:30:00.000 | 3
2015-01-01 11:30:00.000 | 1
2015-01-01 12:30:00.000 | 1
2015-01-01 13:30:00.000 | 1
2015-01-02 08:30:00.000 | 2
2015-01-03 08:30:00.000 | 1
2015-01-04 08:30:00.000 | 1
Result for 60 min
2015-01-01 08:30:00.000 | 4
2015-01-01 09:30:00.000 | 3
2015-01-01 11:30:00.000 | 1
2015-01-01 12:30:00.000 | 1
2015-01-01 13:30:00.000 | 1
2015-01-02 08:30:00.000 | 2
2015-01-03 08:30:00.000 | 1
2015-01-04 08:30:00.000 | 1
Thanks in advance!
You don't want datepart() for this purpose. You want a minutes count. One way is to use datediff():
SELECT datediff(minute, @StartTime, value) / @TimeInterval as minutes,
    COUNT(*) AS NumberOfVisitors
FROM @T
WHERE Value >= @StartTime AND Value < @EndTime
GROUP BY datediff(minute, @StartTime, value) / @TimeInterval
ORDER BY minutes ;
SQL Server does integer division, so you don't have to worry about remainders in this case. Also, COUNT(*) cannot return NULL, so neither ISNULL() nor COALESCE() is appropriate.
Or, to get a date/time value:
SELECT dateadd(minute,
        (datediff(minute, @StartTime, value) / @TimeInterval) * @TimeInterval,
        @StartTime) as period,
    COUNT(*) AS NumberOfVisitors
FROM @T
WHERE Value >= @StartTime AND Value < @EndTime
GROUP BY datediff(minute, @StartTime, value) / @TimeInterval
ORDER BY period ;

SQL Server grouping a timestamp by hour but keep as date format (don't want to pull hour out)

When I want to group a bunch of time stamps by day, by
CONVERT (datetime, CONVERT (varchar, dbo.MEASUREMENT_Battery.STAMP, 101))
it produces a "day" stamp that SQL Server still treats as a date, so it can be sorted and used as such.
What I'm trying to figure out is whether it's possible to do the same thing by hour. I tried
CAST(DATEPART(Month, STAMP) AS varchar) + '/' + CAST(DATEPART(Day, STAMP) AS varchar) + '/' + CAST(DATEPART(Year, STAMP) AS varchar) + ' ' + CAST(DATEPART(Hour, STAMP) AS varchar) + ':00:00.000'
and this "works" but SQL Server doesn't view this as a date anymore so I can't sort properly.
The end result I want is right though: ex: 9/9/2015 9:00:00.000
Do NOT convert to a string until you absolutely have to "present" the result; CONVERT() and FORMAT() return string representations of temporal information.
The following method returns a datetime value truncated to the hour without resorting to string manipulation (and is hence fast).
select
dateadd(hour, datediff(hour,0, dbo.MEASUREMENT_Battery.STAMP ), 0)
, count(*)
from dbo.MEASUREMENT_Battery
group by
dateadd(hour, datediff(hour,0, dbo.MEASUREMENT_Battery.STAMP ), 0)
SQL Fiddle
MS SQL Server 2014 Schema Setup:
CREATE TABLE MEASUREMENT_Battery
([STAMP] datetime)
;
INSERT INTO MEASUREMENT_Battery
([STAMP])
VALUES
('2015-11-12 07:40:15'),
('2015-11-12 08:40:15'),
('2015-11-12 09:40:15'),
('2015-11-12 10:40:15'),
('2015-11-12 11:40:15'),
('2015-11-12 12:40:15'),
('2015-11-12 13:40:15'),
('2015-11-12 14:40:15')
;
NOTE: the output below for column [Stamp] is the default display
Results:
| | |
|----------------------------|---|
| November, 12 2015 07:00:00 | 1 |
| November, 12 2015 08:00:00 | 1 |
| November, 12 2015 09:00:00 | 1 |
| November, 12 2015 10:00:00 | 1 |
| November, 12 2015 11:00:00 | 1 |
| November, 12 2015 12:00:00 | 1 |
| November, 12 2015 13:00:00 | 1 |
| November, 12 2015 14:00:00 | 1 |
If you absolutely insist on displaying the date/time value a particular way, then you may add the display format in the select clause (but it is not needed in the group by clause!)
select
FORMAT(dateadd(hour, datediff(hour,0, dbo.MEASUREMENT_Battery.STAMP ), 0) , 'MM/dd/yyyy HH')
, count(*)
from dbo.MEASUREMENT_Battery
group by
dateadd(hour, datediff(hour,0, dbo.MEASUREMENT_Battery.STAMP ), 0)
What happens is that when you use DateTime style 101 (at the end of the second CONVERT), the date is converted to mm/dd/yyyy and the time is always set to 00:00:00.000, as documented here:
https://msdn.microsoft.com/en-us/library/ms187928.aspx
Now, from what I understand of your question, you would like to include the hour as well, and this can be done like this:
SELECT FORMAT(STAMP , 'MM/dd/yyyy HH') + ':00:00.000'
Note:
':00:00.000' is optional and is just for nicer output.
FORMAT() only works in SQL Server 2012 and later versions.
Testing with some test data, we will see that we get the expected result:
-- Drop temp table if it exists
IF OBJECT_ID('tempdb..#T') IS NOT NULL DROP TABLE #T
-- Create temp table
CREATE TABLE #T ( myDate DATETIME )
-- Insert dummy values
INSERT INTO #T VALUES ( '2015-12-25 14:00:00.000' ) -- 14
INSERT INTO #T VALUES ( '2015-12-25 14:00:00.000' ) -- 14
INSERT INTO #T VALUES ( '2015-12-25 15:00:00.000' )
INSERT INTO #T VALUES ( '2015-12-25 16:00:00.000' )
INSERT INTO #T VALUES ( '2015-12-25 17:00:00.000' ) -- 17
INSERT INTO #T VALUES ( '2015-12-25 17:00:00.000' ) -- 17
-- Select query
SELECT COUNT( myDate ), MAX( FORMAT( myDate , 'MM/dd/yyyy HH') + ':00:00.000' ) FROM #T
GROUP BY DATEPART( hour, myDate )
Output:
2 12/25/2015 14:00:00.000
1 12/25/2015 15:00:00.000
1 12/25/2015 16:00:00.000
2 12/25/2015 17:00:00.000

Returning Distinct Dates

Morning
I am trying to return the distinct dates of an outcome by a unique identifier.
For example:
ID|Date
1 | 2011-10-01 23:57:59
1 | 2011-10-01 23:58:59
1 | 2010-10-01 23:59:59
2 | 2010-09-01 23:59:59
2 | 2010-09-01 23:58:29
3 | 2010-09-01 23:58:39
3 | 2010-10-01 23:59:14
3 | 2010-10-01 23:59:36
The times are not important, just the dates. So, for example, on ID 1 I can't simply do a distinct on the ID, as that would return only one of my dates. I would want to return:
1|2011-10-01
1|2010-10-01
I Have tried the following query:
Drop Table #Temp
select Distinct DateAdd(dd, DateDiff(DD,0, Date),0) as DateOnly
,ID
Into #Temp
From Table
Select Distinct (Date)
,ID
From #Temp
I am getting the following results however:
ID|Date
1 | 2011-10-01 00:00:00
1 | 2011-10-01 00:00:00
1 | 2010-10-01 00:00:00
I'm new to SQL, so apologies if I have made a glaring mistake; I have got this far by searching through previously asked questions.
As always, any help and pointers are greatly appreciated.
You can use the T-SQL convert function to extract the Date.
Try
CONVERT(char(10), GetDate(),126)
so, in your case, do
Drop Table #Temp
select Distinct CONVERT(char(10), DateAdd(dd, DateDiff(DD,0, Date),0), 126) as DateOnly
,ID
Into #Temp
From Table
Select Distinct (Date)
,ID
From #Temp
Further information: Getting the Date Portion of a SQL Server Datetime field
Hope this helps!
If you are using SQL Server 2008 or later, you can cast the DateTime column to the built-in Date type; otherwise, to get rid of the time you can cast just the day/month/year parts to VARCHAR and convert back to datetime so the time part is zeroed:
declare @dates table(id int, dt datetime)
INSERT INTO @dates VALUES(1, '2011-10-01 23:57:49')
INSERT INTO @dates VALUES(2, '2011-10-02 23:57:59')
INSERT INTO @dates VALUES(2, '2011-10-02 23:57:39')
SELECT stripped.id, stripped.dateOnly
FROM
(
-- this will return dates with zeroed time part 2011-10-01 00:00:00.000
SELECT id,
CONVERT(datetime,
CAST(YEAR(dt) as VARCHAR(4)) + '-' +
CAST(MONTH(dt) AS VARCHAR(2)) + '-' +
CAST(DAY(dt) AS VARCHAR(2))) as dateOnly
FROM @dates
) stripped
GROUP BY stripped.id, stripped.dateOnly
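For completeness, the SQL Server 2008+ route mentioned at the start of this answer is just a direct cast to the date type; a sketch against the same @dates sample (run in the same batch as the declaration):
-- Cast straight to date; no string round-trip needed.
SELECT DISTINCT id, CAST(dt AS date) AS dateOnly
FROM @dates;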