How do I count the number of employees working during each hour, based on their in_time and out_time?
I have the following table with each employee's in_time and out_time.
My table:
emp_reader_id att_date in_time out_time Shift_In_Time Shift_Out_Time
111 2020-03-01 2020-03-01 08:55:24.000 2020-03-01 10:26:56.000 09:00:00.0000000 10:30:00.0000000
112 2020-03-01 2020-03-01 08:45:49.000 2020-03-01 11:36:14.000 09:00:00.0000000 11:30:00.0000000
113 2020-03-01 2020-03-01 10:58:19.000 2020-03-01 13:36:31.000 09:00:00.0000000 12:00:00.0000000
I need to count the employees in the format below.
Expected Output:
Period Working Employee Count
0 - 1 0
1 - 2 0
2 - 3 0
3 - 4 0
4 - 5 0
5 - 6 0
6 - 7 0
7 - 8 0
8 - 9 2
9 - 10 2
10 - 11 3
11 - 12 2
12 - 13 1
13 - 14 1
14 - 15 0
15 - 16 0
16 - 17 0
17 - 18 0
18 - 19 0
19 - 20 0
20 - 21 0
21 - 22 0
22 - 23 0
23 - 0 0
I tried the query below against my raw data, but it does not work; I need the result from the table above.
SELECT
(DATENAME(hour, C.DT) + ' - ' + DATENAME(hour, DATEADD(hour, 2, C.DT))) as PERIOD,
Count(C.EVENTID) as Emp_Work_On_Time
FROM
trnevents C
WHERE convert(varchar(50),C.DT,23) ='2020-03-01'
GROUP BY (DATENAME(hour, C.DT) + ' - ' +
DATENAME(hour, DATEADD(hour, 2, C.DT)))
You need a list of hours (0 to 23) and then a left join to your table.
The following query uses a recursive CTE to generate that list. You could also use a VALUES constructor or a tally table, which gives the same result (a sketch of the VALUES version follows below).
; with hours as
(
select hour = 0
union all
select hour = hour + 1
from hours
where hour < 23
)
select convert(varchar(2), h.hour) + ' - ' + convert(varchar(2), (h.hour + 1) % 24) as [Period],
count(t.emp_reader_id) as [Working Employee Count]
from hours h
left join timesheet t on h.hour >= datepart(hour, in_time)
and h.hour <= datepart(hour, out_time)
group by h.hour
Demo : db<>fiddle
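If you prefer the VALUES constructor mentioned above to the recursive CTE, a minimal sketch along the same lines (using the same timesheet table and columns as the query above) could look like this:
; with hours (hour) as
(
    -- 0..23 from a VALUES list instead of recursion
    select v.n
    from (values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),
                 (12),(13),(14),(15),(16),(17),(18),(19),(20),(21),(22),(23)) as v(n)
)
select convert(varchar(2), h.hour) + ' - ' + convert(varchar(2), (h.hour + 1) % 24) as [Period],
       count(t.emp_reader_id) as [Working Employee Count]
from hours h
left join timesheet t on h.hour >= datepart(hour, t.in_time)
                     and h.hour <= datepart(hour, t.out_time)
group by h.hour
order by h.hour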
Hope this might help, but take a look at how the shift in and shift out times appear in your data... it seems to me they are set automatically, so they could already contain all you need.
SELECT COUNT(Idemp) from aaShiftCountEmp WHERE in_time<'2020-03-01 09:00:00.000' AND out_time>'2020-03-01 10:00:00.000'
This is just an example for 9h to 10h, but you can make it automatic (see the sketch below).
By the way, are you sure this should not show the count of people per shift? I mean, are you sure you want 0-1, 1-2, etc. instead of 0-1:30, 1:30-3 and so on?
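A rough sketch of the "automatic" version, keeping the aaShiftCountEmp table, Idemp, in_time and out_time names and the same "clocked in before the hour starts and out after it ends" condition from the example above (the date is still hard-coded, so treat this as an illustration only):
; with hours as
(
    select hour = 0
    union all
    select hour + 1 from hours where hour < 23
)
select convert(varchar(2), h.hour) + ' - ' + convert(varchar(2), (h.hour + 1) % 24) as Period,
       count(e.Idemp) as Emp_Count
from hours h
left join aaShiftCountEmp e
       on e.in_time  < dateadd(hour, h.hour,     convert(datetime, '2020-03-01'))  -- in before the hour starts
      and e.out_time > dateadd(hour, h.hour + 1, convert(datetime, '2020-03-01'))  -- out after the hour ends
group by h.hour
order by h.hour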
Related
I have a dataset from oracle db that looks something like this:
ticket_num start_date repair_date
1 1/1/2021 02:05:15 1/4/2021 09:30:00
2 1/2/2021 12:15:45 1/2/2021 14:03:00
3 1/2/2021 12:20:00 1/2/2021 13:54:00
I need to calculate the number of active tickets in each hourly time slot. So if a ticket was opened before that hour and closed after it, it would be counted. All days and hours need to be represented, regardless of whether there are active tickets open during that time. The expected output is:
month day hour #active_tix
1 1 2 1
1 1 3 1
...
1 2 12 3
1 2 13 3
1 2 14 2
1 2 15 1
...
1 4 9 1
1 4 10 0
Any help would be greatly appreciated.
You need a calendar table. In the query below it is created on the fly
select c.hstart, count(t.ticket_num) n
from (
-- create calendar on the fly
select timestamp '2021-01-01 00:00:00' + NUMTODSINTERVAL(level-1, 'hour') hstart
from dual
connect by timestamp '2021-01-01 00:00:00' + NUMTODSINTERVAL(level-1, 'hour') < timestamp '2022-01-01 00:00:00'
) c
left join mytable t on t.start_date < c.hstart and t.repair_date >= c.hstart
group by c.hstart
order by c.hstart
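If you also want the month / day / hour columns from the expected output rather than a raw timestamp, you can extract them from hstart; a sketch on the same on-the-fly calendar (Oracle EXTRACT, assuming the same mytable name):
select extract(month from c.hstart) as month,
       extract(day from c.hstart) as day,
       extract(hour from c.hstart) as hour,
       count(t.ticket_num) as active_tix
from (
  -- same on-the-fly calendar of hourly slots for 2021
  select timestamp '2021-01-01 00:00:00' + numtodsinterval(level - 1, 'hour') hstart
  from dual
  connect by timestamp '2021-01-01 00:00:00' + numtodsinterval(level - 1, 'hour') < timestamp '2022-01-01 00:00:00'
) c
left join mytable t on t.start_date < c.hstart and t.repair_date >= c.hstart
group by c.hstart
order by c.hstart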
Is there a way using TSQL to convert an integer to year, month and days
for e.g. 365 converts to 1year 0 months and 0 days
366 converts to 1year 0 months and 1 day
20 converts to 0 year 0 months and 20 days
200 converts to 0 year 13 months and 9 days
408 converts to 1 year 3 months and 7 days .. etc
I don't know of any inbuilt way in SQL Server 2008, but the following logic will give you all the pieces you need to concatenate the items together:
select
n
, year(dateadd(day,n,0))-1900 y
, month(dateadd(day,n,0))-1 m
, day(dateadd(day,n,0))-1 d
from (
select 365 n union all
select 366 n union all
select 20 n union all
select 200 n union all
select 408 n
) d
| n | y | m | d |
|-------|---|---|----|
| 365 | 1 | 0 | 0 |
| 366 | 1 | 0 | 1 |
| 20 | 0 | 0 | 20 |
| 200 | 0 | 6 | 19 |
| 408 | 1 | 1 | 12 |
Note that the zero used in the DATEADD function is the date 1900-01-01, hence 1900 is deducted from the year calculation.
Thanks to Martin Smith for correcting my assumption about the leap year.
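For example, to concatenate those pieces into the requested text (a sketch; on SQL Server 2008 string concatenation has to go through + and CONVERT, since CONCAT arrived later):
select n
     , convert(varchar(10), year(dateadd(day, n, 0)) - 1900) + ' years '
     + convert(varchar(10), month(dateadd(day, n, 0)) - 1) + ' months '
     + convert(varchar(10), day(dateadd(day, n, 0)) - 1) + ' days' as pretty
from (
  select 365 n union all
  select 366 n union all
  select 20 n union all
  select 200 n union all
  select 408 n
) d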
You could try it without using any functions, just by dividing integer values, if we treat every month as 30 days:
DECLARE @days INT;
SET @days = 365;
SELECT [Years] = @days / 365,
[Months] = (@days % 365) / 30,
[Days] = (@days % 365) % 30;
@days = 365
Years Months Days
1 0 0
@days = 20
Years Months Days
0 0 20
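For comparison, with @days = 408 this integer-division approach gives 1 year, 1 month and 13 days (408 % 365 = 43; 43 / 30 = 1; 43 % 30 = 13), slightly different from the calendar-based DATEADD result above, because every year is treated as 365 days and every month as 30 days.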
I have a subquery which returns this:
item_id, item_datetime, item_duration_in_days
1, '7-dec-2016-12:00', 3
2, '8-dec-2016-11:00', 4
3, '20-dec-2016-05:00', 10
4, '2-jan-2017-14:00', 50
5, '29-jan-2017-22:00', 89
I want to get the "item_id" whose computed interval contains "now()". The algorithm for that is:
1) var duration_days = interval 'item_duration_in_days[i]'
2) for the very first item:
new_datetime[i] = item_datetime[i] + duration_days
3) for others:
- if a new_datetime from the previous step overlaps with the current item_datetime[i]:
new_datetime[i] = new_datetime[i - 1] + duration_days
- else:
new_datetime[i] = item_datetime[i] + duration_days
4) return an item for each iteration:
{id, item_datetime, new_datetime}
That is, there'll be something like:
item_id item_datetime new_datetime
1 7 dec 2016 10 dec 2016
2 11 dec 2016 15 dec 2016
3 20 dec 2016 30 dec 2016
4 2 jan 2017 22 feb 2017 <------- found because now() == Feb 5
5 22 feb 2017 21 may 2017
How can I do that? I think it needs something like a "fold" function. Can it be done with a single SQL query, or will it have to be a PL/pgSQL procedure with intermediate variable storage?
Or please give me pointers on how to calculate this.
If I understand your task correctly, you need a recursive query: take the first row, then process each following row in turn.
WITH RECURSIVE x AS (
SELECT *
FROM (
SELECT item_id,
item_datetime,
item_datetime + (item_duration_in_days::text || ' day')::interval AS cur_end
FROM ti
ORDER BY item_datetime
LIMIT 1
) AS first
UNION ALL
SELECT item_id,
cur_start,
cur_start + (item_duration_in_days::text || ' day')::interval
FROM (
SELECT item_id,
CASE WHEN item_datetime > prev_end THEN
item_datetime
ELSE
prev_end
END AS cur_start,
item_duration_in_days
FROM (
SELECT ti.item_id,
ti.item_datetime,
x.cur_end + '1 day'::interval AS prev_end,
item_duration_in_days
FROM x
JOIN ti ON (
ti.item_id != x.item_id
AND ti.item_datetime >= x.item_datetime
)
ORDER BY ti.item_datetime
LIMIT 1
) AS a
) AS a
) SELECT * FROM x;
Result:
item_id | item_datetime | cur_end
---------+---------------------+---------------------
1 | 2016-12-07 12:00:00 | 2016-12-10 12:00:00
2 | 2016-12-11 12:00:00 | 2016-12-15 12:00:00
3 | 2016-12-20 05:00:00 | 2016-12-30 05:00:00
4 | 2017-01-02 14:00:00 | 2017-02-21 14:00:00
5 | 2017-02-22 14:00:00 | 2017-05-22 14:00:00
(5 rows)
To see only the current job:
....
) SELECT * FROM x WHERE item_datetime <= now() AND cur_end >= now();
item_id | item_datetime | cur_end
---------+---------------------+---------------------
4 | 2017-01-02 14:00:00 | 2017-02-21 14:00:00
(1 row)
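Note that a plain window function is not enough here, because each row's start depends on the previous row's computed end rather than on the stored columns alone, so the value has to be carried forward row by row; that is exactly what the recursive CTE (or, alternatively, a PL/pgSQL loop) provides.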
I'm trying to group a large amount of data into smaller bundles.
Currently the code for my query is as follows
SELECT [DateTime]
,[KW]
FROM [POWER]
WHERE datetime >= '2014-04-14 06:00:00' and datetime < '2014-04-21 06:00:00'
ORDER BY datetime
which gives me
DateTime KW
4/14/2014 6:00:02.0 1947
4/14/2014 6:00:15.0 1946
4/14/2014 6:00:23.0 1947
4/14/2014 6:00:32.0 1011
4/14/2014 6:00:43.0 601
4/14/2014 6:00:52.0 585
4/14/2014 6:01:02.0 582
4/14/2014 6:01:12.0 580
4/14/2014 6:01:21.0 579
4/14/2014 6:01:32.0 579
4/14/2014 6:01:44.0 578
4/14/2014 6:01:53.0 578
4/14/2014 6:02:01.0 577
4/14/2014 6:02:12.0 577
4/14/2014 6:02:22.0 577
4/14/2014 6:02:32.0 576
4/14/2014 6:02:42.0 578
4/14/2014 6:02:52.0 577
4/14/2014 6:03:02.0 577
4/14/2014 6:03:12.0 577
4/14/2014 6:03:22.0 578
.
.
.
.
4/21/2014 5:59:55.0 11
There is a reading every 10 seconds from a substation, and I want to group this data into hourly readings.
Thus 00:00-01:00 = sum([KW]) where datetime >= '^date^ 00:00:00' and datetime < '^date^ 01:00:00'
I've tried using a CONVERT to split the datetime into separate date and time fields and then add all the time fields together, with no success.
Can someone please assist me? I'm not sure what the right way of doing this is. Thanks
ADDED
OK, so the split of the DateTime is working nicely, but if I add a SUM([KW]) function SQL gives an error, and if I include any of the grouping functions it also complains.
Below is what works; I still need to sum the KW per grouping of hours.
I've tried using GROUP BY Hour and GROUP BY DATEPART(Hour,[DateTime]).
Neither worked.
SELECT DATEPART(Hour,[DateTime]) Hour
,DATEPART(Day,[DateTime]) Day
,DATEPART(Month,[DateTime]) Month
,([KVAReal])
,([KVAr])
,([KW])
FROM [POWER].[dbo].[IT10t_PAC3200]
WHERE datetime >= '2014-04-14 06:00:00' and datetime < '2014-04-21 06:00:00'
order by datetime
The function convert(varchar(13), getdate(), 120) displays 2014-06-03 16. You can use that to group by the hour:
SELECT convert(varchar(13), [DateTime], 120) as dt
, SUM(KW) as SumKwPerHour
FROM POWER
WHERE [DateTime] >= '2014-04-14 06:00:00'
AND [DateTime]< '2014-04-21 06:00:00'
GROUP BY
convert(varchar(13), [DateTime], 120)
ORDER BY
dt
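If you would rather keep the DATEPART style from your attempt, the key point is that every non-aggregated expression in the SELECT list must also appear in the GROUP BY; a sketch using the [POWER] table and columns from the question:
SELECT DATEPART(Year, [DateTime]) AS [Year]
     , DATEPART(Month, [DateTime]) AS [Month]
     , DATEPART(Day, [DateTime]) AS [Day]
     , DATEPART(Hour, [DateTime]) AS [Hour]
     , SUM([KW]) AS SumKwPerHour
FROM [POWER]
WHERE [DateTime] >= '2014-04-14 06:00:00'
  AND [DateTime] < '2014-04-21 06:00:00'
GROUP BY DATEPART(Year, [DateTime])
       , DATEPART(Month, [DateTime])
       , DATEPART(Day, [DateTime])
       , DATEPART(Hour, [DateTime])
ORDER BY [Year], [Month], [Day], [Hour]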
Ok so here is the solution that worked for me.
Declare @Begin Varchar(60),
        @End Varchar(60)
Set @Begin = '2014-05-22 06:00:00'
Set @End = '2014-06-01 06:00:00'
SELECT
ID='10T'
,DATEPART(month,[DateTime]) Month
,DATEPART(day,[DateTime]) Day
,DATEPART(hour,[DateTime]) as Hour
,avg([kw]) hourly_kWh_10T
,avg([KVAr]) hourly_kVarh_10T
,avg([KVAReal]) hourly_kVAh_10T
,(case when DATEPART(hour,[DateTime]) >= 6 and DATEPART(hour,[DateTime]) < 18 then 'D' else 'N' end) shift
FROM [POWER]
where DateTime >= @Begin and DateTime < @End
group by DATEPART(Hour,[DateTime]),DATEPART(Day,[DateTime]),DATEPART(Month,[DateTime])
order by DATEPART(Month,[DateTime]),DATEPART(Day,[DateTime]),DATEPART(Hour,[DateTime])
This code gave me the result I was looking for. I also included variables for the start and end dates to reduce the input needed for different date ranges, and added a CASE WHEN expression to determine whether the power was consumed during the day or night shift.
ID Month Day Hour hourly_kWh_10T hourly_kVarh_10T hourly_kVAh_10T shift
10T 5 22 6 269.278551 80.771587 294.038997 D
10T 5 22 7 241.213296 75.991689 268.085872 D
10T 5 22 8 283.925 93.302777 319.211111 D
10T 5 22 9 11.763888 31.313888 36.372222 D
10T 5 22 10 215.947222 69.702777 243.541666 D
10T 5 22 11 1895.816666 396.805555 1948.061111 D
10T 5 22 12 2385.486033 513.589385 2447.648044 D
10T 5 22 13 440.737569 126.209944 475.049723 D
10T 5 22 14 737.158333 183.05 775.763888 D
10T 5 22 15 41.961111 38.086111 67.277777 D
10T 5 22 16 11.875 30.577777 35.736111 D
10T 5 22 17 11.263888 27.563888 32.497222 D
10T 5 22 18 11.104956 26.381924 31.323615 N
10T 5 22 19 11.648936 28.813829 34.015957 N
10T 5 22 20 229.819944 75.227146 268.432132 N
10T 5 22 21 300.597222 92.661111 340.413888 N
10T 5 22 22 494.575 124.358333 527.183333 N
10T 5 22 23 922.244444 190.472222 954.961111 N
10T 5 23 0 2445.908333 516.008333 2507.613888 N
10T 5 23 1 1399.147222 317.380555 1446.786111 N
10T 5 23 2 258.097222 81.641666 288.308333 N
10T 5 23 3 258.480555 79.694444 285.488888 N
10T 5 23 4 262.108333 82.455555 290.261111 N
10T 5 23 5 270.830555 82.030555 297.011111 N
10T 5 23 6 570.836111 151.930555 606.05 D
10T 5 23 7 10.580555 24.488888 29.233333 D
I'm pretty new to this, so forgive me if this has already been asked (I had no idea what to even search on).
I have 2 tables, Accounts and Usage
AccountID AccountStartDate AccountEndDate
-------------------------------------------
1 12/1/2012 12/1/2013
2 1/1/2013 1/1/2014
UsageId AccountID EstimatedUsage StartDate EndDate
------------------------------------------------------
1 1 10 1/1 1/31
2 1 11 2/1 2/29
3 1 23 3/1 3/31
4 1 23 4/1 4/30
5 1 15 5/1 5/31
6 1 20 6/1 6/30
7 1 15 7/1 7/31
8 1 12 8/1 8/31
9 1 14 9/1 9/30
10 1 21 10/1 10/31
11 1 27 11/1 11/30
12 1 34 12/1 12/31
13 2 13 1/1 1/31
14 2 13 2/1 2/29
15 2 28 3/1 3/31
16 2 29 4/1 4/30
17 2 31 5/1 5/31
18 2 26 6/1 6/30
19 2 43 7/1 7/31
20 2 32 8/1 8/31
21 2 18 9/1 9/30
22 2 20 10/1 10/31
23 2 47 11/1 11/30
24 2 33 12/1 12/31
I'd like to write one query that gives me estimated usage for each month (starting now until the last month that we serve an account) for all accounts being served during that month.
The results would be as follows:
Month-Year Total Est Usage
------------------------------
Oct-12 0 (none being served)
Nov-12 0 (none being served)
Dec-12 34 (only accountid 1 being served)
Jan-13 23 (accountid 1 & 2 being served)
Feb-13 24 (accountid 1 & 2 being served)
Mar-13 51 (accountid 1 & 2 being served)
...
Dec-13 33 (only accountid 2 being served)
Jan-14 0 (none being served)
Feb-14 0 (none being served)
I'm assuming I need to sum and then do a Group By...but not really sure logically how I'd lay this out.
Revised Answer:
I've created a Months table with columns MonthID, Month with values like (201212, 12), (201301, 1), ...
I've also reorganised the usage table to have a month column rather than the start date and end date, as it makes the idea clearer.
See http://sqlfiddle.com/#!3/f57d84/6 for details
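For reference, a minimal sketch of such a Months table (SQL Server syntax; the exact MonthID range is an assumption chosen to cover Oct-12 through Feb-14 from the expected output):
CREATE TABLE Months (MonthID INT, [Month] INT);

INSERT INTO Months (MonthID, [Month])
VALUES (201210, 10), (201211, 11), (201212, 12),
       (201301, 1), (201302, 2), (201303, 3), (201304, 4),
       (201305, 5), (201306, 6), (201307, 7), (201308, 8),
       (201309, 9), (201310, 10), (201311, 11), (201312, 12),
       (201401, 1), (201402, 2);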
The query is now:
Select
m.MonthID,
Sum(u.EstimatedUsage) TotalEstimatedUsage
From
Accounts a
Inner Join
Usage u
On a.AccountID = u.AccountID
Inner Join
Months m
On m.MonthID Between
Year(a.AccountStartDate) * 100 + Month(a.AccountStartDate) And
Year(a.AccountEndDate) * 100 + Month(a.AccountEndDate) And
m.Month = u.Month
Group By
m.MonthID
Order By
1
Previous answer, kept for reference, which assumed the usage ranges were full dates rather than just months:
Select
Year(u.StartDate),
Month(u.StartDate),
Sum(Case When a.AccountStartDate <= u.StartDate And a.AccountEndDate >= u.EndDate Then u.EstimatedUsage Else 0 End) TotalEstimatedUsage
From
Accounts a
Inner Join
Usage u
On a.AccountID = u.AccountID
Group By
Year(u.StartDate),
Month(u.StartDate)
Order By
1, 2