I have a question about comparing date ranges.
I have a table that holds a state log of a machine. The state can be 0 or 1, and each row records when the state change started and when it ended. For example:
START_DATE | END_DATE | STATE
2019-05-28 07:12:43 | 2019-05-29 09:12:43 | 1
2019-05-29 09:12:43 | 2019-06-01 08:12:43 | 0
2019-06-11 10:12:43 | 2019-06-12 16:12:43 | 1
2019-06-12 16:12:43 | 2019-06-12 17:12:43 | 0
I want to make a report that iterates through each WW (work week) and checks what the average state was in that WW.
My problem is that a state change could start in WW22 and end in WW24, so when I GROUP BY WW I get no values for WW23, because no state started or ended in WW23. But during WW23 the machine was in state 0: the interval started in WW22 and ended in WW24, and throughout that whole time the state was 0.
It seems that I can't use GROUP BY WW to solve it.
I may need to check START_DATE and END_DATE for the cases where there are no records in WW23, and add something like:
CASE WHEN WW BETWEEN START_DATE AND END_DATE THEN...
But I'm not sure how to loop over the WWs without using GROUP BY.
I use Oracle SQL.
Thanks.
I hope I understood correctly. It would help if you showed us your query and told us how you compute the average state and where these weeks come from. Anyway, here is a query which generates all weeks of 2019 and joins them with your log.
select to_char(wsd, 'iw') week, wsd, start_date, end_date, state
from (
    -- wsd = the Monday starting each ISO week of 2019
    select trunc(date '2019-01-01', 'iw') + level * 7 - 7 wsd
    from dual
    connect by trunc(date '2019-01-01', 'iw') + level * 7 <= date '2020-01-01')
-- keep log rows that overlap the week [wsd, wsd + 7)
left join log on wsd < end_date and start_date < wsd + 7
The interesting part is this range:
week  week_start_date  log_start            log_end              state
21    2019-05-20
22    2019-05-27       2019-05-28 07:12:43  2019-05-29 09:12:43  1
22    2019-05-27       2019-05-29 09:12:43  2019-06-01 08:12:43  0
23    2019-06-03
24    2019-06-10       2019-06-11 10:12:43  2019-06-12 16:12:43  1
24    2019-06-10       2019-06-12 16:12:43  2019-06-12 17:12:43  0
25    2019-06-17
I don't know how you compute the average state for weeks 22 and 24. Maybe it is a weighted average of the time differences, maybe something else. But that is not important; you now have a row for week 23, with a missing state.
If a missing state means that the previous value is still valid for that week, use:
nvl(state, lag(state) over (order by wsd))
or
coalesce(state, lag(state) over (order by wsd), 0)
if you want 0 as the default when the previous week is missing too. If two or more consecutive weeks are missing, add ignore nulls to lag.
Then you can group the data by week and compute the average values.
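Putting it all together, a minimal sketch of the full query (assuming the log table name from above, the carry-forward rule with a default of 0, and a plain per-week average; swap in a time-weighted aggregate if you need one):
with weeks as (
    -- Monday of every ISO week in 2019
    select trunc(date '2019-01-01', 'iw') + level * 7 - 7 as wsd
    from dual
    connect by trunc(date '2019-01-01', 'iw') + level * 7 <= date '2020-01-01'
), filled as (
    select w.wsd,
           -- carry the last known state into weeks with no log rows; default 0
           coalesce(l.state,
                    lag(l.state) ignore nulls over (order by w.wsd, l.start_date),
                    0) as state
    from weeks w
    left join log l
      on w.wsd < l.end_date and l.start_date < w.wsd + 7
)
select to_char(wsd, 'iw') as week,
       avg(state) as avg_state
from filled
group by to_char(wsd, 'iw')
order by week;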
dbfiddle demo
I am new to SQL and this is my first ever question. I am working with a sample database from which I want to extract specific information to display as a dashboard. The issue is that I can only do this partially; I cannot seem to figure it out properly.
SELECT
    s_date AS date,
    p_time AS time,
    process_id AS process,
    sc_gun AS scannumb,
    SUM(line_qty) AS linetotal,
    SUM(area_qty) AS areatotal
FROM dbfile6
WHERE
    process_id IN ('0010','0020','0030')
    AND sc_gun IN ('10','20','30','40','50')
    -- parentheses matter: AND binds tighter than OR
    AND ((s_date = CURDATE() - INTERVAL 1 DAY AND p_time BETWEEN '22:00:00' AND '23:59:59')
      OR (s_date = CURDATE() AND p_time BETWEEN '00:00:00' AND '06:00:00'))
GROUP BY p_time, s_date, process_id, sc_gun
ORDER BY s_date, process_id
What do I want to display?
I can partially get it to work with yesterday's date (s_date) recurring, but I want this to happen Monday to Friday only, skipping the weekend, so that on Monday it looks at Friday's data.
I want to show the time as a range, a night range: 20:00:00 - 06:00:00. The range is tricky because it crosses over into the next day. That works for Monday to Thursday but not Friday, as there is no working weekend, so what would I do there? In addition, I can sum the values successfully and display them as averages for each process, but once I add the time in, each result is displayed individually.
The table below shows what the data looks like in the database; as mentioned earlier, the desired result is for each process to have line_qty and area_qty summed by time range in a day and night cycle. (A sketch of one possible approach follows the table.)
s_date      p_time    process_id  sc_gun  line_qty  area_qty
04/05/2022  04:49:52  0010        10      2         12
03/05/2022  11:50:00  0010        10      5         14
03/05/2022  19:50:00  0010        10      7         16
03/05/2022  13:50:00  0020        20      4         6
03/05/2022  19:50:00  0010        10      7         16
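For what it's worth, here is a sketch of one possible approach, assuming MySQL (curdate() suggests it); the table and column names come from the question, but the 6-hour shift trick and the previous-working-day expression are assumptions to adapt:
-- Shift each timestamp back 6 hours so a night range (20:00-06:00)
-- lands on a single calendar date, then group by that shift date.
SELECT
    DATE(DATE_SUB(TIMESTAMP(s_date, p_time), INTERVAL 6 HOUR)) AS shift_date,
    process_id,
    SUM(line_qty) AS linetotal,
    SUM(area_qty) AS areatotal
FROM dbfile6
WHERE process_id IN ('0010','0020','0030')
  AND sc_gun IN ('10','20','30','40','50')
  AND (p_time >= '20:00:00' OR p_time < '06:00:00')
GROUP BY shift_date, process_id;

-- "Yesterday, skipping the weekend": on Monday this yields last Friday.
SELECT CASE DAYOFWEEK(CURDATE())
           WHEN 2 THEN CURDATE() - INTERVAL 3 DAY  -- Monday -> Friday
           WHEN 1 THEN CURDATE() - INTERVAL 2 DAY  -- Sunday -> Friday
           ELSE CURDATE() - INTERVAL 1 DAY
       END AS previous_working_day;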
The below gives me week numbers where week 1 starts on 1/4/2021
date_trunc('week', transaction_date) as week_number
How can I create a week_number where the week starts on January 1st and counts up 7 days for every week thereafter (for every year)?
And round up/down to 52 weeks at the end of the year?
Code attempted:
This doesn't give me the answer, but I'm thinking something like this might work...
ceil(extract(day from transaction_date)/7) as week_number
Expected Output:
transaction_date  week_number
1/1/2020          1
1/8/2020          2
...               ...
12/31/2020        52
1/1/2021          1
1/8/2021          2
...               ...
12/27/2021        52
12/28/2021        52
12/29/2021        52
12/30/2021        52
12/31/2021        52
1/1/2022          1
Thanks in advance!
A simple way is to use date arithmetic (flooring the day difference before adding 1, so the result is a whole week number):
select 1 + floor((transaction_date - date_trunc('year', transaction_date)) / 7) as year_week
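Note that this formula yields 53 for the last day or two of most years; if those days should fold into week 52 as in the expected output, one option is a least() cap (a sketch; the table name transactions is illustrative, and Snowflake syntax is assumed from the date_trunc usage):
select transaction_date,
       least(1 + floor((transaction_date - date_trunc('year', transaction_date)) / 7),
             52) as week_number
from transactions;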
The below gives me week numbers where week 1 starts on 1/4/2021
That is the default behaviour, and it is defined that way by ISO.
WEEK_OF_YEAR_POLICY
Type: Session (can be set for Account » User » Session)
Description: Specifies how the weeks in a given year are computed.
Values:
0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year.
1: January 1 is included in the first week of the year and December 31 is included in the last week of the year.
Default: 0 (i.e. ISO-like behavior)
It can be overridden on multiple levels; the most granular is the session level:
ALTER SESSION SET WEEK_OF_YEAR_POLICY = 1;
Then you could use the standard code:
SELECT date_trunc('week', transaction_date) as week_number
FROM ...;
Here is how my current dataset is formatted:
USER START_DATE END_DATE NB_MONTHS
--------------------------------------------
111 2020-01-01 2021-02-01 13
222 2020-05-17 2020-09-28 16
333 2020-02-01 2020-03-01 0
Each of my users currently has a start date and an end date for an action they've completed.
I wish to find the duration of their action in MONTHS (as defined by the NB_MONTHS flag).
Here is my current query to get this NB_MONTHS flag:
SELECT
    USERS,
    FLOOR((END_DATE - START_DATE) / 30.00) AS NB_MONTHS
FROM
    TABLE1;
I am currently rounding this value down, as that makes the most sense for my analysis.
Here is where I run into an issue:
My user 333, who technically took 1 month to complete the action (the duration of February), is currently being flagged as "0 months" because February has 28 days (which doesn't work with my query).
Does anyone know how I can avoid this problem?
Does datediff() do what you want?
SELECT USERS,
DATEDIFF(MONTH, START_DATE, END_DATE) as NB_MONTHS
FROM TABLE1;
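For reference, datediff(month, ...) exists in SQL Server and Snowflake. If the database is Oracle (where the bare END_DATE - START_DATE subtraction in the original query works, but datediff() does not exist), months_between() is the comparable tool; a sketch:
-- months_between returns a fractional month count; floor or ceil to taste.
SELECT USERS,
       FLOOR(MONTHS_BETWEEN(END_DATE, START_DATE)) AS NB_MONTHS
FROM TABLE1;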
I have a table where our product records its activity log. The product starts working at 23:00 every day and usually works for one or two hours, which means a batch that starts at 23:00 finishes at about 01:00 the next day.
Now I need to gather statistics on how many posts are registered per batch, but I cannot figure out a script that would let me achieve this. So far I have the following SQL code:
SELECT COUNT(*), DATEPART(DAY,registrationtime),DATEPART(HOUR,registrationtime)
FROM RegistrationMessageLogEntry
WHERE registrationtime > '2014-09-01 20:00'
GROUP BY DATEPART(DAY, registrationtime), DATEPART(HOUR,registrationtime)
ORDER BY DATEPART(DAY, registrationtime), DATEPART(HOUR,registrationtime)
which results in the following:
count day hour
....
1189 9 23
8611 10 0
2754 10 23
6462 11 0
1885 11 23
I.e. I want the number for the 9th at 23:00 grouped with the number for the 10th at 00:00, the 10th at 23:00 with the 11th at 00:00, and so on. How could I do that?
You can do it very easily: use DATEADD to add an hour to the original registrationtime. If you do so, all the registration times of a batch move onto the same day, and you can simply group by the day part.
You could also do it in a more complicated way using CASE WHEN, but that is overkill next to this easy solution.
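A minimal sketch of that DATEADD approach, using the table and filter from the question:
SELECT
    CAST(DATEADD(HOUR, 1, registrationtime) AS date) AS batch_day,
    COUNT(*) AS cnt
FROM RegistrationMessageLogEntry
WHERE registrationtime > '2014-09-01 20:00'
GROUP BY CAST(DATEADD(HOUR, 1, registrationtime) AS date)
ORDER BY batch_day;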
I had to do something similar a few days ago: I had fixed timespans for work shifts to group by, where one of them could start on one day at 10pm and end the next morning at 6am.
What I did was:
Define a "shift date", which was simply the day, with a zero timestamp, on which the shift started, for every entry in the table. I did this by checking whether the timestamp of the entry was between 0am and 6am; in that case I took only the date part of DATEADD(dd, -1, entryDate), which returns the previous day for all entries between 0am and 6am.
I also added an ID for the shift: 0 for the first one (6am to 2pm), 1 for the second one (2pm to 10pm) and 2 for the last one (10pm to 6am).
I was then able to group over the shift date and shift ID (see the sketch after the example below).
Example:
Consider the following source entries:
Timestamp SomeData
=============================
2014-09-01 06:01:00 5
2014-09-01 14:01:00 6
2014-09-02 02:00:00 7
Step one extended the table as follows:
Timestamp SomeData ShiftDay
====================================================
2014-09-01 06:01:00 5 2014-09-01 00:00:00
2014-09-01 14:01:00 6 2014-09-01 00:00:00
2014-09-02 02:00:00 7 2014-09-01 00:00:00
Step two extended the table as follows:
Timestamp SomeData ShiftDay ShiftID
==============================================================
2014-09-01 06:01:00 5 2014-09-01 00:00:00 0
2014-09-01 14:01:00 6 2014-09-01 00:00:00 1
2014-09-02 02:00:00 7 2014-09-01 00:00:00 2
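A sketch of those two steps in one query (T-SQL; the table name SourceTable and the column entryDate are placeholders matching the example):
SELECT ShiftDay, ShiftID, COUNT(*) AS entries
FROM (
    SELECT
        -- step one: entries before 6am belong to the previous day's shift
        CASE WHEN DATEPART(HOUR, entryDate) < 6
             THEN CAST(DATEADD(dd, -1, entryDate) AS date)
             ELSE CAST(entryDate AS date)
        END AS ShiftDay,
        -- step two: 0 = 6am-2pm, 1 = 2pm-10pm, 2 = 10pm-6am
        CASE WHEN DATEPART(HOUR, entryDate) BETWEEN 6 AND 13 THEN 0
             WHEN DATEPART(HOUR, entryDate) BETWEEN 14 AND 21 THEN 1
             ELSE 2
        END AS ShiftID
    FROM SourceTable
) AS s
GROUP BY ShiftDay, ShiftID
ORDER BY ShiftDay, ShiftID;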
If you add one hour to registrationtime, you will be able to group by the date part:
GROUP BY
CAST(DATEADD(HOUR, 1, registrationtime) AS date)
If the starting hour must be reflected accurately in the output (as 9, 23, 10, 23 rather than as 10, 0, 11, 0), you could obtain it as MIN(registrationtime) in the SELECT clause:
SELECT
count = COUNT(*),
day = DATEPART(DAY, MIN(registrationtime)),
hour = DATEPART(HOUR, MIN(registrationtime))
Finally, in case you are not aware, you can reference columns by their aliases in ORDER BY:
ORDER BY
day,
hour
just so that you do not have to repeat the expressions.
The below query will give you what you are expecting:
;WITH CTE AS
(
    SELECT COUNT(*) AS cnt,
           DATEPART(DAY, registrationtime) AS [Day],
           DATEPART(HOUR, registrationtime) AS [Hour],
           RANK() OVER (PARTITION BY DATEPART(HOUR, registrationtime)
                        ORDER BY DATEPART(DAY, registrationtime),
                                 DATEPART(HOUR, registrationtime)) AS Batch_ID
    FROM RegistrationMessageLogEntry
    WHERE registrationtime > '2014-09-01 20:00'
    GROUP BY DATEPART(DAY, registrationtime), DATEPART(HOUR, registrationtime)
)
SELECT SUM(cnt) AS [Count], Batch_ID
FROM CTE
GROUP BY Batch_ID
ORDER BY Batch_ID
You can write CASE expressions as below (with ELSE branches so the other hours keep their own day and hour):
CASE WHEN DATEPART(HOUR, registrationtime) = 23
     THEN DATEPART(DAY, registrationtime) + 1
     ELSE DATEPART(DAY, registrationtime)
END,
CASE WHEN DATEPART(HOUR, registrationtime) = 23
     THEN 0
     ELSE DATEPART(HOUR, registrationtime)
END
My initial answer to this problem was to script it: instead of using SQL, I dipped into Python and normalised the dates there. I am curious whether anyone can come up with a solution using SQL, though.
If a date occurs outside of business hours, I want to normalise it to the next working day. I'll keep this really simple and say that business hours are 9am to 6pm, Monday to Friday. Anything outside of those hours is outside of business hours.
What should happen to the dates is that they are changed so that 2pm on Saturday becomes 9am on Monday morning (the first legitimate time in the business week), 7pm on a Wednesday becomes 9am Thursday morning, and so on. Let's ignore holidays.
Sample data:
mysql> select mydate from mytable ORDER by mydate;
+---------------------+
| mydate |
+---------------------+
| 2009-09-13 17:03:09 |
| 2009-09-14 09:45:49 |
| 2009-09-15 09:57:28 |
| 2009-09-16 21:55:01 |
+---------------------+
4 rows in set (0.00 sec)
The first date is a Sunday so it should be normalised to 2009-09-14 09:00:00
The second date is fine, it's at 9am on a Monday.
The third date is fine, it's at 9am on a Tuesday.
The fourth date is at 9pm (outside of our 9am to 6pm business hours) on a Wednesday and should be transformed to 9am Thursday morning.
I think you're better off with your Python solution ... but I like challenges :)
select mydate
, case dayadjust
-- BUG
-- when 0 then mydate
-- BUG
when 0 then case
when hour(mydate)<9
then date_add(from_days(to_days(mydate)),
INTERVAL 9 HOUR)
else mydate
end
-- BUG SQUASHED
else date_add(from_days(to_days(mydate) + dayadjust),
INTERVAL 9 HOUR)
end as mynewdate
from (
select mydate
, case
when addday>=moreday then addday
else moreday
end as dayadjust
from (
select mydate
, weekday(mydate) as w
, hour(mydate) as h
, case weekday(mydate)
when 6 then 1
when 5 then 2
when 4 then
case
when hour(mydate) >= 18 then 3
else 0
end
else 0
end as addday
, case when hour(mydate)>=18 then 1 else 0 end as moreday
from mytable
order by mydate
) alias1
) alias2
Tested on MySQL
$ mysql tmp < phil.sql
mydate mynewdate
2009-09-12 17:03:09 2009-09-14 09:00:00
2009-09-12 21:03:09 2009-09-14 09:00:00
2009-09-13 17:03:09 2009-09-14 09:00:00
2009-09-14 09:45:49 2009-09-14 09:45:49
2009-09-15 09:57:28 2009-09-15 09:57:28
2009-09-16 21:55:01 2009-09-17 09:00:00
2009-09-17 11:03:09 2009-09-17 11:03:09
2009-09-17 22:03:09 2009-09-18 09:00:00
2009-09-18 12:03:09 2009-09-18 12:03:09
2009-09-18 19:03:09 2009-09-21 09:00:00
2009-09-19 06:03:09 2009-09-21 09:00:00
2009-09-19 16:03:09 2009-09-21 09:00:00
2009-09-19 19:03:09 2009-09-21 09:00:00
Not sure why you want to do this, but if it needs to always be true of all data in your database, you need a trigger. I would set up a table that specifies the business hours and use it to determine the next valid business day and time. (I might even consider a table that tells you exactly what the next business day and hour is for each possibility; this doesn't change a lot, so it might only need updating once a year, when holidays or the overall business hours change. By precalculating, you can probably save processing time.) I would also continue to use your script, because it's better to fix data before it gets entered, but you need the trigger to ensure that data from any source (and sooner or later there will be changes from sources other than your application) meets the data-integrity rules.
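A sketch of the precalculated lookup idea in MySQL; the table and column names here are illustrative, not a definitive design:
-- One row per (weekday, hour) combination, giving the shift to the next valid time.
CREATE TABLE business_hour_map (
    wd       TINYINT NOT NULL,  -- WEEKDAY(): 0 = Monday .. 6 = Sunday
    hr       TINYINT NOT NULL,  -- HOUR(): 0 .. 23
    add_days INT     NOT NULL,  -- days to add to reach the next business day
    snap_9am BOOLEAN NOT NULL,  -- whether the time should be reset to 09:00
    PRIMARY KEY (wd, hr)
);

-- Normalisation then becomes a join instead of date arithmetic:
SELECT t.mydate,
       CASE WHEN m.snap_9am
            THEN DATE(t.mydate) + INTERVAL m.add_days DAY + INTERVAL 9 HOUR
            ELSE t.mydate
       END AS normalised
FROM mytable t
JOIN business_hour_map m
  ON m.wd = WEEKDAY(t.mydate) AND m.hr = HOUR(t.mydate);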
I don't think you can do it in one query, but you can try this:
-- Mon-Thu, at or after 18:00
-- Set date = next day, 09:00
UPDATE myTable
SET mydate = DATE_ADD(DATE_ADD(DATE(mydate), INTERVAL 1 DAY), INTERVAL 9 HOUR)
WHERE TIME(mydate) >= '18:00:00'
  AND DAYOFWEEK(mydate) IN (2,3,4,5);  -- DAYOFWEEK: 1 = Sunday .. 7 = Saturday

-- Mon-Fri, before 09:00
-- Set date = the same day, 09:00
UPDATE myTable
SET mydate = DATE_ADD(DATE(mydate), INTERVAL 9 HOUR)
WHERE TIME(mydate) < '09:00:00'
  AND DAYOFWEEK(mydate) IN (2,3,4,5,6);

-- Fri at or after 18:00, or any time Sat/Sun
-- Set date = next Monday, 09:00
UPDATE myTable
SET mydate = DATE_ADD(DATE_ADD(DATE(mydate),
                               INTERVAL MOD(9 - DAYOFWEEK(mydate), 7) DAY),
                      INTERVAL 9 HOUR)
WHERE (TIME(mydate) >= '18:00:00' AND DAYOFWEEK(mydate) = 6)
   OR DAYOFWEEK(mydate) IN (7, 1);