So I've got this code:
SELECT to_char(dstamp, 'HH24') as HOUR, SUM(update_qty) total_received
FROM inventory_transaction
WHERE dstamp BETWEEN to_date('28/05/2021 18:00:00', 'dd/mm/yyyy hh24:mi:ss')
AND to_date('29/05/2021 06:00:00', 'dd/mm/yyyy hh24:mi:ss')
AND code = 'Receipt'
GROUP BY to_char(dstamp, 'HH24')
ORDER BY HOUR ASC ;
I'm trying to add an additional column that shows total_putaway: exactly the same query, but with code = 'Putaway'. I tried a CROSS JOIN and some subqueries, but it doesn't really work, because I want to keep grouping my records by hour, so 12 rows in total.
Use conditional aggregation:
SELECT to_char(dstamp, 'HH24') as HOUR,
SUM(CASE WHEN code = 'Receipt' THEN update_qty END) as total_received,
SUM(CASE WHEN code = 'Putaway' THEN update_qty END) as total_putaway
FROM inventory_transaction
WHERE dstamp BETWEEN to_date('28/05/2021 18:00:00', 'dd/mm/yyyy hh24:mi:ss') AND
to_date('29/05/2021 06:00:00', 'dd/mm/yyyy hh24:mi:ss')
GROUP BY to_char(dstamp, 'HH24')
ORDER BY HOUR ASC;
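If you want 0 instead of NULL for an hour that only has one of the two codes, you can wrap each sum in COALESCE; a minimal sketch of that variant against the same table:
SELECT to_char(dstamp, 'HH24') as HOUR,
       COALESCE(SUM(CASE WHEN code = 'Receipt' THEN update_qty END), 0) as total_received,
       COALESCE(SUM(CASE WHEN code = 'Putaway' THEN update_qty END), 0) as total_putaway
FROM inventory_transaction
WHERE dstamp BETWEEN to_date('28/05/2021 18:00:00', 'dd/mm/yyyy hh24:mi:ss') AND
      to_date('29/05/2021 06:00:00', 'dd/mm/yyyy hh24:mi:ss')
GROUP BY to_char(dstamp, 'HH24')
ORDER BY HOUR ASC;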
Related
I'm trying to get a count from an Oracle query, broken down by hour from 14:00 to 19:00 for a given day. I'm using this query and I want to group the count output.
Select count(*), extract(hour from eventtime) as hours
from TR_MFS_LOADCARRIER
WHERE eventid = 5
And eventtime BETWEEN to_date('05/09/2022 14:00:00', 'dd/mm/yyyy hh24:mi:ss')
and to_date('05/09/2022 19:00:00', 'dd/mm/yyyy hh24:mi:ss')
group by hours
It fails. Where am I going wrong?
The hours alias is defined in the SELECT clause, which is evaluated after the GROUP BY clause, so it cannot be referenced in the GROUP BY; repeat the expression EXTRACT(hour from eventtime) there instead.
Select count(*),
extract(hour from eventtime) as hours
from TR_MFS_LOADCARRIER
WHERE eventid = 5
And eventtime BETWEEN to_date('05/09/2022 14:00:00', 'dd/mm/yyyy hh24:mi:ss')
and to_date('05/09/2022 19:00:00', 'dd/mm/yyyy hh24:mi:ss')
group by extract(hour from eventtime)
If your eventtime column is a DATE data-type then you cannot EXTRACT the hours field and need to cast it to a TIMESTAMP data-type:
Select count(*),
extract(hour from CAST(eventtime AS TIMESTAMP)) as hours
from TR_MFS_LOADCARRIER
WHERE eventid = 5
And eventtime BETWEEN to_date('05/09/2022 14:00:00', 'dd/mm/yyyy hh24:mi:ss')
and to_date('05/09/2022 19:00:00', 'dd/mm/yyyy hh24:mi:ss')
group by extract(hour from CAST(eventtime AS TIMESTAMP))
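If eventtime really is a DATE, another option is to skip the cast and format the hour with TO_CHAR; a sketch under that assumption (hours then comes back as a two-character string rather than a number):
Select count(*),
       to_char(eventtime, 'HH24') as hours
from TR_MFS_LOADCARRIER
WHERE eventid = 5
And eventtime BETWEEN to_date('05/09/2022 14:00:00', 'dd/mm/yyyy hh24:mi:ss')
and to_date('05/09/2022 19:00:00', 'dd/mm/yyyy hh24:mi:ss')
group by to_char(eventtime, 'HH24')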
How can I create an overview without duplicates in column "CHUTE"?
E.g. in the result below: AX002 = 129
select
COUNT(PPL_SDCC),
substr (PPL_DISCHARGEID,6,5) as CHUTE
from T1LOG.PPL_PIECELOG
where
PPL_DISCHARGEID like 'PS%X%'
and substr (PPL_SDCC,11,1)='1'
and PPL_DISCHARGETIME between TO_DATE ('01/08/2021 10:00:00' , 'DD/MM/YYYY hh24:mi:ss') and TO_DATE ('02/08/2021 09:00:00' , 'DD/MM/YYYY hh24:mi:ss')
group by PPL_DISCHARGEID
order by CHUTE
Rather than grouping only by the column PPL_DISCHARGEID, you should include the exact SUBSTR calculation in the GROUP BY clause. Update the GROUP BY in your query to:
SELECT COUNT(PPL_SDCC),
SUBSTR(PPL_DISCHARGEID,6,5) as CHUTE
FROM T1LOG.PPL_PIECELOG
WHERE PPL_DISCHARGEID like 'PS%X%'
AND SUBSTR(PPL_SDCC,11,1)='1'
AND PPL_DISCHARGETIME BETWEEN TO_DATE('01/08/2021 10:00:00', 'DD/MM/YYYY hh24:mi:ss') and TO_DATE('02/08/2021 09:00:00', 'DD/MM/YYYY hh24:mi:ss')
GROUP BY SUBSTR(PPL_DISCHARGEID,6,5)
ORDER BY CHUTE;
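If you would rather keep grouping by the CHUTE alias itself, you can compute it in an inline view first; a sketch of that variant of the same query:
SELECT COUNT(PPL_SDCC), CHUTE
FROM (SELECT PPL_SDCC,
             SUBSTR(PPL_DISCHARGEID,6,5) as CHUTE
      FROM T1LOG.PPL_PIECELOG
      WHERE PPL_DISCHARGEID like 'PS%X%'
        AND SUBSTR(PPL_SDCC,11,1)='1'
        AND PPL_DISCHARGETIME BETWEEN TO_DATE('01/08/2021 10:00:00', 'DD/MM/YYYY hh24:mi:ss') AND TO_DATE('02/08/2021 09:00:00', 'DD/MM/YYYY hh24:mi:ss'))
GROUP BY CHUTE
ORDER BY CHUTE;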
Given:
INSERT INTO EP_ACCESS (PROFILE_ID, EPISODE_ID, START_TIMESTAMP, DISCONNECT_TIMESTAMP)
VALUES ('1', '1', TO_DATE('2020-01-01 00:00:01','yyyy/mm/dd hh24:mi:ss'), TO_DATE('2020-01-01 00:00:02','yyyy/mm/dd hh24:mi:ss'));
How can I select the rows whose start_timestamp is in 2020?
You would use:
where start_timestamp >= date '2020-01-01' and
start_timestamp < date '2021-01-01'
Of course, you can use a timestamp literal if you prefer typing longer strings.
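For example, the same half-open range written with timestamp literals:
where start_timestamp >= timestamp '2020-01-01 00:00:00' and
      start_timestamp < timestamp '2021-01-01 00:00:00'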
There are several options.
1 - Use BETWEEN
SELECT *
FROM EP_ACCESS
WHERE START_TIMESTAMP BETWEEN TO_DATE('2020-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND TO_DATE('2020-12-31 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
or
SELECT *
FROM EP_ACCESS
WHERE START_TIMESTAMP BETWEEN DATE '2020-01-01'
AND DATE '2021-01-01' - INTERVAL '1' SECOND
2 - Use EXTRACT
SELECT *
FROM EP_ACCESS
WHERE EXTRACT(YEAR FROM START_TIMESTAMP) = 2020
3 - Use TRUNC
SELECT *
FROM EP_ACCESS
WHERE TRUNC(START_TIMESTAMP, 'YYYY') = DATE '2020-01-01'
Of these options, BETWEEN will probably give the best performance: the other two apply a function to START_TIMESTAMP, so an ordinary index on that column cannot be used and the function has to be evaluated for every row in the table.
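If you do need the TRUNC (or EXTRACT) form regularly, a function-based index can make it indexable too; a minimal sketch, assuming you are free to add an index (the index name here is made up):
CREATE INDEX ep_access_start_year_ix
    ON EP_ACCESS (TRUNC(START_TIMESTAMP, 'YYYY'));

SELECT *
FROM EP_ACCESS
WHERE TRUNC(START_TIMESTAMP, 'YYYY') = DATE '2020-01-01';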
I have a big table with start times and end times.
It looks like this:
Start_time date-time (format: dd/mm/yyyy hh24:mi:ss),
End_time date-time (format: dd/mm/yyyy hh24:mi:ss)
I might have rows that represent time which is included in other rows
My desired result is a table that resolves this: for each chain of overlapping rows I want to take the first start time and see next to it the last end time.
I tried to left join the table with itself, on the second row's start time falling between the first row's start time and end time, where the second row's end time is greater than the first row's. Then I tried a sliding window to take the max end time, or even a GROUP BY.
However, this idea does not take into account that I may have, for example:
10:05-10:10
10:07-10:12
10:09-10:15
10:11-10:20
So after the join I get 10:05-10:15 and 10:11-10:20. The 10:11 row is not joined to the first row because it does not fall within that range.
So I'm back to the same problem I had in the beginning.
My desired result for the rows above is actually:
10:05-10:20
It seems to be a difficult problem.
I don't know PL/SQL, but maybe I could write some function that repeats this query until there is nothing left to join?
Hope to get your help!
Thanks.
I don't know how to format this, but you can copy-paste it into your editor and then format it.
Instead of my test data in the WITH clause, you can use your table.
I suppose you have some sort of ID, so I include it:
with test_table as (
select 1 id, to_date('2019-04-07 10:05', 'yyyy-mm-dd hh24:mi') start_time, to_date('2019-04-07 10:08', 'yyyy-mm-dd hh24:mi') end_time from dual union all
select 2 id, to_date('2019-04-07 10:07', 'yyyy-mm-dd hh24:mi') start_time, to_date('2019-04-07 10:10', 'yyyy-mm-dd hh24:mi') end_time from dual union all
select 3 id, to_date('2019-04-07 10:11', 'yyyy-mm-dd hh24:mi') start_time, to_date('2019-04-07 10:15', 'yyyy-mm-dd hh24:mi') end_time from dual union all
select 4 id, to_date('2019-04-07 10:12', 'yyyy-mm-dd hh24:mi') start_time, to_date('2019-04-07 10:20', 'yyyy-mm-dd hh24:mi') end_time from dual union all
select 5 id, to_date('2019-04-08 10:05', 'yyyy-mm-dd hh24:mi') start_time, to_date('2019-04-08 10:10', 'yyyy-mm-dd hh24:mi') end_time from dual union all
select 6 id, to_date('2019-04-08 10:07', 'yyyy-mm-dd hh24:mi') start_time, to_date('2019-04-08 10:12', 'yyyy-mm-dd hh24:mi') end_time from dual union all
select 7 id, to_date('2019-04-08 10:09', 'yyyy-mm-dd hh24:mi') start_time, to_date('2019-04-08 10:15', 'yyyy-mm-dd hh24:mi') end_time from dual union all
select 8 id, to_date('2019-04-08 10:11', 'yyyy-mm-dd hh24:mi') start_time, to_date('2019-04-08 10:20', 'yyyy-mm-dd hh24:mi') end_time from dual
)
select id, to_char(start_time, 'yyyy-mm-dd hh24:mi') start_time, to_char(end_time, 'yyyy-mm-dd hh24:mi') end_time,
       -- for each outer row, walk (connect by nocycle) through every row whose
       -- range overlaps the previous row's range and return the latest end_time found
       (SELECT MAX(to_char(end_time, 'yyyy-mm-dd hh24:mi'))
        from test_table t2
        connect by nocycle
              prior t2.id != t2.id and
              PRIOR end_time > start_time and
              PRIOR start_time < end_time
        start with t2.id = t1.id) max_date
from test_table t1;
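Another way to get the fully merged ranges (10:05-10:20 in your example) is the analytic "gaps and islands" approach: flag a row as starting a new group when its start_time lies after every earlier end_time, then group on the running sum of those flags. A sketch against the same test_table, ignoring the id column:
with flagged as (
  select start_time, end_time,
         case when start_time <= max(end_time) over (order by start_time
                    rows between unbounded preceding and 1 preceding)
              then 0 else 1 end new_group
  from test_table
)
select to_char(min(start_time), 'yyyy-mm-dd hh24:mi') start_time,
       to_char(max(end_time), 'yyyy-mm-dd hh24:mi') end_time
from (select start_time, end_time,
             sum(new_group) over (order by start_time) grp
      from flagged)
group by grp
order by 1;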
I'm working in PL/SQL, trying to find event counts for a given week, broken down by day and hour.
Essentially I want to plug in the week and get event counts such that I can easily discern > 1M per day AND > 300k per hour (= GOLD), > 500k per day and > 200k per hour (= SILVER), and > 100k per day and > 30k per hour (= BRONZE).
I'm trying something like this to get data per day, but I'm not sure how to break it down per day, per hour. Oracle rookie here.
SELECT TO_CHAR(TRUNC(store.transaction_datetime, 'HH'), 'DD-MON-YYYY HH24:MI:SS'), Identifier,
COUNT (*)
FROM data.stats store
WHERE store.transaction_datetime >= '2016-09-04 00:00:00'
AND store.transaction_datetime <= '2016-09-10 23:59:59'
GROUP BY Identifier, TO_CHAR (TRUNC (store.transaction_datetime, 'HH'), 'DD-MON-YYYY HH24:MI:SS')
ORDER BY TO_CHAR (TRUNC (store.transaction_datetime, 'HH'), 'DD-MON-YYYY HH24:MI:SS') ASC;
Anything I can throw into Excel and perform counts on will work for me.
Any help appreciated.
Let me know if it helps:
SELECT T.*,
CASE WHEN CNT_PER_DAY > ... AND CNT_PER_HOUR > ... THEN 'GOLD'
WHEN CNT_PER_DAY > ... AND CNT_PER_HOUR > ... THEN 'SILVER'
WHEN CNT_PER_DAY > ... AND CNT_PER_HOUR > ... THEN 'BRONZE'
ELSE ...
END CATEGORY
FROM (
SELECT DISTINCT
Identifier,
TO_CHAR(TRUNC(store.transaction_datetime, 'HH'), 'DD-MON-YYYY HH24:MI:SS') HOUR_TIME,
COUNT (*) OVER (PARTITION BY Identifier,TRUNC(store.transaction_datetime, 'HH')) CNT_PER_HOUR,
TO_CHAR(TRUNC(store.transaction_datetime, 'DD'), 'DD-MON-YYYY HH24:MI:SS') DAY_TIME,
COUNT (*) OVER (PARTITION BY Identifier,TRUNC(store.transaction_datetime, 'DD')) CNT_PER_DAY,
store.transaction_datetime
FROM data.stats store
WHERE store.transaction_datetime >= '2016-09-04 00:00:00'
AND store.transaction_datetime <= '2016-09-10 23:59:59') T
ORDER BY TO_CHAR (TRUNC (T.transaction_datetime, 'HH'), 'DD-MON-YYYY HH24:MI:SS') ASC;
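With the thresholds from the question plugged into the placeholders, the CASE expression might look like this (a sketch; adjust the comparisons to your exact cut-offs):
CASE WHEN CNT_PER_DAY > 1000000 AND CNT_PER_HOUR > 300000 THEN 'GOLD'
     WHEN CNT_PER_DAY > 500000  AND CNT_PER_HOUR > 200000 THEN 'SILVER'
     WHEN CNT_PER_DAY > 100000  AND CNT_PER_HOUR > 30000  THEN 'BRONZE'
     ELSE NULL
END CATEGORY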