I need help with an equipment follow-up project.
I have a SQL table with 3 columns: Equipment Name, Status, and Date of Status Change (DATETIME format).
EQUIPMENT   | STATUS        | CHANGEDATE
EQUIPMENT-1 | QUALIFICATION | 2020-06-30 09:37:42
EQUIPMENT-1 | WAIT REPAIR   | 2020-06-30 16:29:20
EQUIPMENT-1 | UP            | 2020-07-27 14:19:33
EQUIPMENT-1 | ENGINEERING   | 2020-09-18 15:25:01
EQUIPMENT-1 | UP            | 2020-09-20 17:31:53
The idea is to determine the elapsed time of each piece of equipment in each status between 2 fixed dates.
For example, I would like to know the elapsed time of EQUIPMENT-1 in all the statuses between 2020-07-01 and 2020-10-01, with a result table something like this:
STATUS      | ELAPSED TIME (in days)
WAIT REPAIR | 26,60
UP          | 63,31 (10,27 + 53,05)
ENGINEERING | 2,09
Today I have C# code that calculates these elapsed times, but it's slow...
So I would like to know whether it's easy to replace this process with a SQL query.
Thanks for your help.
I think you want lead() and aggregation:
select equipment, status,
sum(datediff(minute,
changedate,
coalesce(next_changedate, '2020-10-01')
) / (24 * 60.0)
) as decimal_days
from (select t.*,
lead(changedate) over (partition by equipment order by changedate) as next_changedate
from t
where changedate >= '2020-07-01' and changedate < '2020-10-01'
) t
group by equipment, status;
EDIT:
If you need the initial time as well:
select equipment, status,
sum(datediff(minute,
(case when changedate < '2020-07-01' then '2020-07-01' else changedate end),
coalesce(next_changedate, '2020-10-01')
) / (24 * 60.0)
) as decimal_days
from (select t.*,
lead(changedate) over (partition by equipment order by changedate) as next_changedate
from t
) t
where (changedate >= '2020-07-01' and changedate < '2020-10-01') or
      (next_changedate >= '2020-07-01' and next_changedate < '2020-10-01')
group by equipment, status;
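If a status interval can also run past the end of the window, you may want to clip the end date as well. A sketch (assuming SQL Server and the same table t), clipping both ends of each interval to the window:

select equipment, status,
       sum(datediff(minute,
                    case when changedate < '2020-07-01' then '2020-07-01' else changedate end,
                    case when coalesce(next_changedate, '2020-10-01') > '2020-10-01' then '2020-10-01'
                         else coalesce(next_changedate, '2020-10-01') end
                   ) / (24 * 60.0)
          ) as decimal_days
from (select t.*,
             lead(changedate) over (partition by equipment order by changedate) as next_changedate
      from t
     ) t
where changedate < '2020-10-01' and
      coalesce(next_changedate, '2020-10-01') > '2020-07-01'
group by equipment, status;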
I have the following data:
ID | username | Cost | time
1  | test1    | 1    | 2021-05-22 11:48:36.000
2  | test2    | 2    | 2021-05-20 12:55:22.000
3  | test3    | 5    | 2021-05-21 00:00:00.000
I would like to count the costs for each username, with a daily figure and a month-to-date figure in one table.
I have got the following:
SELECT
username,
COUNT(*) cost
FROM
dbo.sent
WHERE
time BETWEEN {ts '2021-05-01 00:00:00'} AND {ts '2021-05-22 23:59:59'}
GROUP BY
username
ORDER BY
cost DESC;
This returns the monthly figures, and changing the range to {ts '2021-05-22 00:00:00'} AND {ts '2021-05-22 23:59:59'} gives the daily figures; however, I would like a table that shows daily and MTD together:
username | Daily | MTD
test1    | 1     | 1012
test2    | 2     | 500
test3    | 5     | 22
Any help or pointers would be fantastic. I am guessing that I need a temp table, then a second run using the MTD range that updates the temp table where the username matches, then export and delete it, but I have no idea where to start.
First, you need one row per user and date, not just one row per user.
Second, you should fix your date arithmetic so you are not missing a second.
Then, you can use aggregation and window functions:
SELECT username, CONVERT(DATE, time) as dte,
COUNT(*) as cost_per_day,
SUM(COUNT(*)) OVER (PARTITION BY username ORDER BY CONVERT(DATE, time)) as mtd
FROM dbo.sent s
WHERE time >= '2021-05-01' AND
time < '2021-05-23'
GROUP BY username, CONVERT(DATE, time)
ORDER BY username, dte;
You can learn more about window functions in a SQL tutorial (or, ahem, a book on SQL) or in the documentation.
EDIT:
If you only want the most recent date and MTD, then you can either filter the above query for the most recent date or use conditional aggregation:
SELECT username,
SUM(CASE WHEN CONVERT(DATE, time) = '2021-05-22' THEN 1 ELSE 0 END) as cost_per_most_recent_day,
COUNT(*) as MTD
FROM dbo.sent s
WHERE time >= '2021-05-01' AND
time < '2021-05-23'
GROUP BY username
ORDER BY username;
And, you can actually express the query using the current date so it doesn't have to be hardcoded. For this version:
SELECT username,
SUM(CASE WHEN CONVERT(DATE, time) = CONVERT(DATE, GETDATE()) THEN 1 ELSE 0 END) as cost_per_most_recent_day,
COUNT(*) as MTD
FROM dbo.sent s
WHERE time >= DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1) AND
time < DATEADD(DAY, 1, CONVERT(DATE, GETDATE()))
GROUP BY username
ORDER BY username;
My code is:
select customerid, count(campaignid) as T, Convert (varchar, CreatedOn,23) from customerbenefits
where campaignid='6EDBB808-1A91-4B1D-BE1D-27EF15C5D4C7'
and createdon between '2019-09-01' and '2019-10-01'
group by customerid,CreatedOn
having count(campaignid)>1
order by createdon desc
Result is
-- id / count /time
--18655680-3B5E-4001-1984-00000000 / 12 /2019-09-30
--18655680-3B5E-4001-1984-00000000 / 7 / 2019-09-30
--18655680-3B5E-4001-1984-00000000 / 6 / 2019-09-30
I want result as
-- id / count / time
-- 18655680-3B5E-4001-1984-00000000 / 25/ 2019-09-30
I want it grouped by the date and the counts summed.
How can I change my query?
Use two levels of aggregation:
select customerid, dte, sum(T)
from (select customerid, count(*) as T, convert(varchar(255), CreatedOn, 23) as dte
from customerbenefits
where campaignid = '6EDBB808-1A91-4B1D-BE1D-27EF15C5D4C7' and
createdon >= '2019-09-01' and
createdon < '2019-10-01'
group by customerid, CreatedOn
having count(*) > 1
) t
group by customerid, dte
order by dte desc;
Notice that I changed the date comparisons so midnight on 2019-10-01 is not included in the data for September.
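For illustration, BETWEEN is inclusive on both ends, so a row stamped exactly at midnight on 2019-10-01 would pass the original filter but not the half-open one (a quick check, assuming SQL Server):

select case when cast('2019-10-01 00:00:00' as datetime)
                 between '2019-09-01' and '2019-10-01'
            then 1 else 0 end as included_by_between,   -- 1
       case when cast('2019-10-01 00:00:00' as datetime) >= '2019-09-01'
             and cast('2019-10-01 00:00:00' as datetime) < '2019-10-01'
            then 1 else 0 end as included_by_half_open; -- 0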
I'm working on a down time management system that is capable of saving support tickets for problems in a database, my database has the following columns:
-ID
-DateOpen
-DateClosed
-Total
I want to obtain the sum of minutes in a day, taking into account that the tickets can be simultaneous, for example:
ID | DateOpen            | DateClosed          | Total
1  | 2019-04-01 08:00:00 | 2019-04-01 08:45:00 | 45
2  | 2019-04-01 08:10:00 | 2019-04-01 08:20:00 | 10
3  | 2019-04-01 09:06:00 | 2019-04-01 09:07:00 | 1
4  | 2019-04-01 09:06:00 | 2019-04-01 09:41:00 | 33
Can someone help me with that, please?
If I use a plain "SUM" query, it returns 89, but if you look at the dates you will see that the actual result should be 78, because tickets 2 and 3 were opened while another ticket was already in progress...
DECLARE @DateOpen date = '2019-04-01'
SELECT AlarmID, DateOpen, DateClosed, TDT FROM AlarmHistory
WHERE CONVERT(date, DateOpen) = @DateOpen
What you need to do is generate a sequence of integers and use it to generate the times of the day. Join that sequence of times on a range between your open and close dates, then count the number of distinct times.
Here is an example that will work with MySQL:
SET @row_num = 0;
SELECT COUNT(DISTINCT time_stamp)
-- this simulates your dateopen and dateclosed table
FROM (SELECT '2019-04-01 08:00:00' open_time, '2019-04-01 08:45:00' close_time
UNION SELECT '2019-04-01 08:10:00', '2019-04-01 08:20:00'
UNION SELECT '2019-04-01 09:06:00', '2019-04-01 09:07:00'
UNION SELECT '2019-04-01 09:06:00', '2019-04-01 09:41:00') times_used
JOIN (
-- generate sequence of minutes in day
SELECT TIME(sequence*100) time_stamp
FROM (
-- create sequence 1 - 10000
SELECT (@row_num:=@row_num + 1) AS sequence
FROM {table_with_10k+_records}
LIMIT 10000
) minutes
HAVING time_stamp IS NOT NULL
LIMIT 1440
) times ON (time_stamp >= TIME(open_time) AND time_stamp < TIME(close_time));
Since you are selecting only distinct times that are found in the result, minutes that overlap will not be counted.
NOTE: Depending on your database, there may be a better way to generate a sequence. MySQL does not have a generate-sequence function; I did it this way to show the basic idea, which can easily be converted to work with whatever database you are using.
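For instance, on MySQL 8 or later (an assumption, since the version isn't stated), a recursive CTE can replace the user-variable trick; note that the default cte_max_recursion_depth of 1000 needs to be raised to produce 1,440 rows:

SET SESSION cte_max_recursion_depth = 2000;

WITH RECURSIVE minutes AS (
  SELECT 0 AS n
  UNION ALL
  SELECT n + 1 FROM minutes WHERE n < 1439
)
SELECT SEC_TO_TIME(n * 60) AS time_stamp
FROM minutes;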
@drakin8564's answer, adapted for SQL Server, which I believe you're using:
;WITH Gen AS
(
SELECT TOP 1440
CONVERT(TIME, DATEADD(minute, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), '00:00:00')) AS t
FROM sys.all_objects a1
CROSS
JOIN sys.all_objects a2
)
SELECT COUNT(DISTINCT t)
FROM incidents inci
JOIN Gen
ON Gen.t >= CONVERT(TIME, inci.DateOpen)
AND Gen.t < CONVERT(TIME, inci.DateClosed)
Your Total for the last record is wrong: it says 33 while it should be 35, so the query returns 80, not 78.
By the way, just as MarcinJ told you, 41 - 6 is 35, not 33. So the answer is 80, not 78.
The following solution would work even if the date parameter is not one day only (1,440 minutes). Say if the date parameter is a month, or even year, this solution would still work.
Live demo: http://sqlfiddle.com/#!18/462ac/5
-- arranged the opening and closing downtime
with a as
(
select
DateOpen d, 1 status
from dt
union all
select
DateClosed, 2
from dt
)
-- don't compute the downtime from previous date
-- if the current date's status is opened
-- yet the previous status is closed
, downtime_minutes AS
(
select
*,
lag(status) over(order by d, status desc) as prev_status,
case when status = 1 and lag(status) over(order by d, status desc) = 2 then
null
else
datediff(minute, lag(d) over(order by d, status desc), d)
end as downtime
from a
)
select sum(downtime) as all_downtime from downtime_minutes;
Output:
| all_downtime |
|--------------|
| 80 |
See how it works:
It works by computing each downtime from the previous downtime row. Don't compute the downtime if the current row's status is open and the previous row's status is closed, which means the current downtime is a non-overlapping one. Non-overlapping downtimes are denoted by null.
For that newly opened downtime, its downtime is null initially; downtime will be computed on the succeeding rows up until it is closed.
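For the sample tickets above, the ordered events and per-row downtime work out like this (a hand-worked trace of the query, times shown without the date, downtime in minutes):

| d     | status | prev_status | downtime |
|-------|--------|-------------|----------|
| 08:00 | 1      | (null)      | (null)   |
| 08:10 | 1      | 1           | 10       |
| 08:20 | 2      | 1           | 10       |
| 08:45 | 2      | 2           | 25       |
| 09:06 | 1      | 2           | (null)   |
| 09:06 | 1      | 1           | 0        |
| 09:07 | 2      | 1           | 1        |
| 09:41 | 2      | 2           | 34       |

Summing the downtime column gives 10 + 10 + 25 + 0 + 1 + 34 = 80 minutes.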
We can make the code shorter by reversing the condition:
-- arranged the opening and closing downtime
with a as
(
select
DateOpen d, 1 status
from dt
union all
select
DateClosed, 2
from dt
-- order by d. postgres can do this?
)
-- don't compute the downtime from previous date
-- if the current date's status is opened
-- yet the previous status is closed
, downtime_minutes AS
(
select
*,
lag(status) over(order by d, status desc) as prev_status,
case when not ( status = 1 and lag(status) over(order by d, status desc) = 2 ) then
datediff(minute, lag(d) over(order by d, status desc), d)
end as downtime
from a
)
select sum(downtime) from downtime_minutes;
Not particularly proud of my original solution: http://sqlfiddle.com/#!18/462ac/1
As for the status desc in order by d, status desc: if a DateClosed is the same as another downtime's DateOpen, status desc will sort the DateClosed first.
For this data, where 8:00 is present in both DateOpen and DateClosed:
INSERT INTO dt
([ID], [DateOpen], [DateClosed], [Total])
VALUES
(1, '2019-04-01 07:00:00', '2019-04-01 07:50:00', 50),
(2, '2019-04-01 07:45:00', '2019-04-01 08:00:00', 15),
(3, '2019-04-01 08:00:00', '2019-04-01 08:45:00', 45);
For identical times (e.g., 8:00), if we do not sort the close before the open, then the 7:00 downtime will only be counted up to 7:50 instead of up to 8:00, since the downtime opened at 8:00 starts counting from zero. Here's how the open and close events are arranged and computed when there's no status desc for identical times such as 8:00: the total downtime is only 95 minutes, which is wrong; it should be 105 minutes.
Here's how they are arranged and computed if we sort the DateClosed before the DateOpen (by using status desc) when they share the same time, e.g., 8:00: the total downtime is 105 minutes, which is correct.
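A hand-worked sketch of the two orderings for this data (times only, downtime in minutes):

Without status desc (the 8:00 open sorts before the 8:00 close):

| d     | status | downtime |
|-------|--------|----------|
| 07:00 | 1      | (null)   |
| 07:45 | 1      | 45       |
| 07:50 | 2      | 5        |
| 08:00 | 1      | (null)   |
| 08:00 | 2      | 0        |
| 08:45 | 2      | 45       |

Total: 95 minutes (wrong).

With status desc (the 8:00 close sorts before the 8:00 open):

| d     | status | downtime |
|-------|--------|----------|
| 07:00 | 1      | (null)   |
| 07:45 | 1      | 45       |
| 07:50 | 2      | 5        |
| 08:00 | 2      | 10       |
| 08:00 | 1      | (null)   |
| 08:45 | 2      | 45       |

Total: 105 minutes (correct).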
Another approach uses the gaps-and-islands technique. This answer is based on SQL Time Packing of Islands.
Live test: http://sqlfiddle.com/#!18/462ac/11
with gap_detector as
(
select
DateOpen, DateClosed,
case when
lag(DateClosed) over (order by DateOpen) is null
or lag(DateClosed) over (order by DateOpen) < DateOpen
then
1
else
0
end as gap
from dt
)
, downtime_grouper as
(
select
DateOpen, DateClosed,
sum(gap) over (order by DateOpen) as downtime_group
from gap_detector
)
-- group's open and closed detector. then computes the group's downtime
select
downtime_group,
min(DateOpen) as group_date_open,
max(DateClosed) as group_date_closed,
datediff(minute, min(DateOpen), max(DateClosed)) as group_downtime,
sum(datediff(minute, min(DateOpen), max(DateClosed)))
over(order by downtime_group) as downtime_running_total
from downtime_grouper
group by downtime_group
Output:

| downtime_group | group_date_open     | group_date_closed   | group_downtime | downtime_running_total |
|----------------|---------------------|---------------------|----------------|------------------------|
| 1              | 2019-04-01 08:00:00 | 2019-04-01 08:45:00 | 45             | 45                     |
| 2              | 2019-04-01 09:06:00 | 2019-04-01 09:41:00 | 35             | 80                     |
How it works
A DateOpen is the start of a series of downtime if it has no previous downtime (indicated by a null lag(DateClosed)). A DateOpen is also the start of a series of downtime if there is a gap between it and the previous downtime's DateClosed.
with gap_detector as
(
select
lag(DateClosed) over (order by DateOpen) as previous_downtime_date_closed,
DateOpen, DateClosed,
case when
lag(DateClosed) over (order by DateOpen) is null
or lag(DateClosed) over (order by DateOpen) < DateOpen
then
1
else
0
end as gap
from dt
)
select *
from gap_detector
order by DateOpen;
Output:
After detecting the gap starters, we do a running total of the gap so we can group downtimes that are contiguous to each other.
with gap_detector as
(
select
DateOpen, DateClosed,
case when
lag(DateClosed) over (order by DateOpen) is null
or lag(DateClosed) over (order by DateOpen) < DateOpen
then
1
else
0
end as gap
from dt
)
select
DateOpen, DateClosed, gap,
sum(gap) over (order by DateOpen) as downtime_group
from gap_detector
order by DateOpen;
As we can see from the output above, we can now easily detect the downtime group's earliest DateOpen and latest DateClosed by applying MIN(DateOpen) and MAX(DateClosed) by grouping on downtime_group. On downtime_group 1, we have earliest DateOpen of 08:00 and latest DateClosed of 08:45. On downtime_group 2, we have earliest DateOpen of 09:06 and latest DateClosed of 9:41. And from that we can recalculate the correct downtime even if there are simultaneous downtimes.
We can make the code shorter by eliminating the detection of a null previous downtime (the case where the current row is the first row in the table) by reversing the logic. Instead of detecting the gaps, we detect the islands (contiguous downtimes). Something is contiguous if the previous downtime's DateClosed overlaps the DateOpen of the current downtime, denoted by 0. If it does not overlap, then it is a gap, denoted by 1.
Here's the query:
Live test: http://sqlfiddle.com/#!18/462ac/12
with gap_detector as
(
select
DateOpen, DateClosed,
case when lag(DateClosed) over (order by DateOpen) >= DateOpen
then
0
else
1
end as gap
from dt
)
, downtime_grouper as
(
select
DateOpen, DateClosed,
sum(gap) over (order by DateOpen) as downtime_group
from gap_detector
)
-- group's open and closed detector. then computes the group's downtime
select
downtime_group,
min(DateOpen) as group_date_open,
max(DateClosed) as group_date_closed,
datediff(minute, min(DateOpen), max(DateClosed)) as group_downtime,
sum(datediff(minute, min(DateOpen), max(DateClosed)))
over(order by downtime_group) as downtime_running_total
from downtime_grouper
group by downtime_group
If you are using SQL Server 2012 or higher:
iif(lag(DateClosed) over (order by DateOpen) >= DateOpen, 0, 1) as gap
I'm using SQL Server 2008.
I have table constructed the following way:
Date (datetime)
TimeIn (datetime)
TimeOut (datetime)
UserReference (nvarchar)
LocationID
My desired results are: for every hour between hour 7 (7 am) and hour 18 (6 pm), I want to know the user who had the highest (TimeIn - TimeOut) for every location (the last condition is optional).
So I've got an aggregate function which calculates the datediff in seconds between TimeOut and TimeIn, aliased as Total.
I want my results to look a bit like this:
Hour 7 | K1345 | 50 | Place #5
Hour 7 | K3456 | 10 | Place #4
Hour 8 | K3333 | 5 | Place #5
etc.
What I've tried so far:
A CTE using the ROW_NUMBER() function, partitioning by my aggregated column and ordering by it. This only returns one row.
A CTE where I do all my aggregations (including datepart(hour,date)) and use the max aggregation to get the highest total time in my outer query.
I know I have to do it with a CTE somehow; I'm just not exactly sure how to join the CTE and my outer query.
Am I on the right track using a ROW_NUMBER() or Rank()?
Queries I've tried:
WITH cte as
(
SELECT * ,
rn = ROW_NUMBER() over (partition by datediff(second, [TimeIn], [TimeOut])order by datediff(second, [TimeIn], [TimeOut]) desc)
FROM TimeTable (nolock)
where DateCreated > '20131023 00:00:00' and DateCreated < '20131023 23:59:00'
)
SELECT datepart(hour,cte.DateCreated) as hour,cte.UserReference,(datediff(second, [TimeIn], [TimeOut])) as [Response Time],LocationID
from cte
where cte.rn = 1
and DATEPART(hh,datecreated) >= 7 and DATEPART(hh,datecreated) <= 18
order by hour asc
This only returns a few rows.
Something else I've tried:
with cte as
(
SELECT Datecreated as Date,
UserReference as [User],
datediff(second, [TimeIn], [TimeOut]) as Time,
LocationID as Location
FROM TimeTable
WHERE datecreated... --daterange
)
SELECT DATEPART(HOUR,date), cte.[User], MAX(Time), Location
FROM cte
WHERE DATEPART(hh,datecreated) >= 7 and DATEPART(hh,datecreated) <= 18
GROUP BY DATEPART(HOUR,date), cte.[User], Location
Row of sample data:

Date                    | UserRef | TimeIn                  | TimeOut                 | locationid
2013-10-23 06:26:12.783 | KF34334 | 2013-10-23 06:27:07.000 | 2013-10-23 06:27:08.000 | 10329
I hope this will help
WITH TotalTime AS (
SELECT
CAST(DateCreated AS DATE) as [date]
,DATEPART(hour,DateCreated) AS [hour]
,SUM(DATEDIFF(second,TimeIn,TimeOut)) AS Total
,UserReference
,locationid
FROM TimeTable
WHERE DATEPART(hour,DateCreated) >= 7 and DATEPART(hour,DateCreated) <= 18
GROUP BY UserReference,locationid,CAST(DateCreated AS DATE),DATEPART(hour,DateCreated)
)
, rn AS (
SELECT *, ROW_NUMBER() OVER (PARTITION BY [date],[hour],locationid ORDER BY Total DESC) AS row_num
FROM TotalTime
)
SELECT *
FROM rn
WHERE row_num = 1
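If you only need the top user per date, hour, and location, a shorter variant is TOP 1 WITH TIES ordered by ROW_NUMBER(); a sketch (assuming SQL Server 2008 and the same TimeTable columns):

SELECT TOP 1 WITH TIES
       CAST(DateCreated AS DATE) AS [date],
       DATEPART(hour, DateCreated) AS [hour],
       UserReference,
       locationid,
       SUM(DATEDIFF(second, TimeIn, TimeOut)) AS Total
FROM TimeTable
WHERE DATEPART(hour, DateCreated) >= 7 AND DATEPART(hour, DateCreated) <= 18
GROUP BY UserReference, locationid, CAST(DateCreated AS DATE), DATEPART(hour, DateCreated)
ORDER BY ROW_NUMBER() OVER (PARTITION BY CAST(DateCreated AS DATE),
                                         DATEPART(hour, DateCreated),
                                         locationid
                            ORDER BY SUM(DATEDIFF(second, TimeIn, TimeOut)) DESC);

This keeps exactly the rows where that ROW_NUMBER() equals 1, which is the same set the rn CTE filters for.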
I have a table with the following structure:
CustID | DateAdded
396    | 2012-02-09
396    | 2012-02-09
396    | 2012-02-08
396    | 2012-02-07
396    | 2012-02-07
396    | 2012-02-07
396    | 2012-02-06
396    | 2012-02-06
I would like to know how I can count the number of records per day, for the last 7 days in SQL and then return this as an integer.
At present I have the following SQL query written:
SELECT *
FROM Responses
WHERE DateAdded >= dateadd(day, datediff(day, 0, GetDate()) - 7, 0)
RETURN
However, this only returns all entries for the past 7 days. How can I count the records per day for the last 7 days?
select DateAdded, count(CustID)
from Responses
WHERE DateAdded >=dateadd(day,datediff(day,0,GetDate())- 7,0)
GROUP BY DateAdded
select DateAdded, count(CustID)
from tbl
group by DateAdded
As for the 7-day interval, that part is a database-dependent question.
SELECT DateAdded, COUNT(1) AS NUMBERADDBYDAY
FROM Responses
WHERE DateAdded >= dateadd(day,datediff(day,0,GetDate())- 7,0)
GROUP BY DateAdded
This one is like the answer above that uses the MySQL DATE_FORMAT() function. I also selected just one specific week in January.
SELECT
DatePart(day, DateAdded) AS date,
COUNT(entryhash) AS count
FROM Responses
where DateAdded > '2020-01-25' and DateAdded < '2020-02-01'
GROUP BY
DatePart(day, DateAdded )
If your timestamp includes time, not only date, use:
SELECT DATE_FORMAT(`timestamp`, '%Y-%m-%d') AS date, COUNT(id) AS count FROM `table` GROUP BY DATE_FORMAT(`timestamp`, '%Y-%m-%d')
You could also try this:
SELECT DISTINCT (DATE(dateadded)) AS unique_date, COUNT(*) AS amount
FROM table
GROUP BY unique_date
ORDER BY unique_date ASC
SELECT count(*), dateadded FROM Responses
WHERE DateAdded >=dateadd(day,datediff(day,0,GetDate())- 7,0)
group by dateadded
RETURN
This will give you a count of records for each dateadded value. Don't make the mistake of adding more columns to the select, expecting to get just one count per day. The group by clause will give you a row for every unique instance of the columns listed.
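For example (a sketch against the same Responses table), adding CustID to the select and group by gives one row per customer per day rather than one row per day:

select DateAdded, CustID, count(*) as num_records
from Responses
WHERE DateAdded >= dateadd(day, datediff(day, 0, GetDate()) - 7, 0)
group by DateAdded, CustID
order by DateAdded, CustID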
select DateAdded, count(DateAdded) as num_records
from your_table
WHERE DateAdded >=dateadd(day,datediff(day,0,GetDate())- 7,0)
group by DateAdded
order by DateAdded
Unfortunately, the best answer here IMO is a comment by @Profex on an incorrect answer, but the solution I went with is:
SELECT FORMAT(DateAdded, 'yyyy-MM-dd'), count(CustID)
FROM Responses
WHERE DateAdded >= dateadd(day,datediff(day,0,GetDate())- 7,0)
GROUP BY FORMAT(DateAdded, 'yyyy-MM-dd')
ORDER BY FORMAT(DateAdded, 'yyyy-MM-dd')
Note that I haven't tested this SQL since I don't have the OP's DB, but this approach works well in my scenario, where the date is stored to the second.
The important part here is using the FORMAT(DateAdded, 'yyyy-MM-dd') method to drop the time without losing the year and month, as would happen if you used DATEPART(day, DateAdded).
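If FORMAT() turns out to be slow on a large table, a lighter-weight alternative (a sketch, assuming SQL Server 2008 or later) is to cast the datetime to DATE, which also drops the time while keeping the year and month:

SELECT CAST(DateAdded AS date) AS DateAdded, COUNT(CustID) AS num_records
FROM Responses
WHERE DateAdded >= dateadd(day, datediff(day, 0, GetDate()) - 7, 0)
GROUP BY CAST(DateAdded AS date)
ORDER BY CAST(DateAdded AS date)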
If a day among the last 7 days has no records, the following code will still list that day with a count of zero.
DECLARE @startDate DATE = GETDATE() - 6,
        @endDate DATE = GETDATE();
DECLARE @daysTable TABLE
(
    OrderDate date
)
DECLARE @daysOrderTable TABLE
(
    OrderDate date,
    OrderCount int
)
Insert into @daysTable
SELECT TOP (DATEDIFF(DAY, @startDate, @endDate) + 1)
    Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY a.object_id) - 1, @startDate)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b;
Insert into @daysOrderTable
select OrderDate, ISNULL((SELECT COUNT(*) AS OdrCount
                          FROM [dbo].[MyOrderTable] odr
                          WHERE CAST(odr.[CreatedDate] as date) = dt.OrderDate
                          group by CAST(odr.[CreatedDate] as date)
                         ), 0) AS OrderCount
from @daysTable dt
select * from @daysOrderTable
RESULT
OrderDate OrderCount
2022-11-22 42
2022-11-23 6
2022-11-24 34
2022-11-25 0
2022-11-26 28
2022-11-27 0
2022-11-28 22
SELECT DATE_FORMAT(DateAdded, '%Y-%m-%d'),
COUNT(CustID)
FROM Responses
GROUP BY DATE_FORMAT(DateAdded, '%Y-%m-%d');