Please go through this edited table.
You can treat order_header_key as the order no.
I want to get the list of order nos whose current status is 3, whose previous status was 2, and where status date(status 3) - status date(status 2) <= 3 days for that order,
in the following table
For order no 1, date(status 3) - date(status 2) = 20 OCT - 19 OCT,
which is less than 3 days --> so it is a valid order.
But for order no 3, date(status 3) - date(status 2) = 30 OCT - 24 OCT,
which is more than 3 days, so it is an invalid order.
Order no 2 is invalid since its statuses are 3 and 1; status 2 is missing.
Assuming an order can't have more than one entry per order_no/status combination, you could join two subqueries:
SELECT s3.order_no
FROM (SELECT *
FROM orders
WHERE status = 3) s3
JOIN (SELECT *
FROM orders
WHERE status = 2) s2 ON s3.order_no = s2.order_no AND
s3.status_date - s2.status_date <= 3
Use lag():
select o.*
from (select o.*,
lag(o.status_date) over (partition by o.order_no order by o.status_date ) as prev_sd,
lag(o.status) over (partition by o.order_no order by o.status_date) as prev_status
from orders o
) o
where prev_status = 2 and status = 3 and
(status_date - prev_sd) <= 3;
Analytic functions (lag() in this case) allow you to avoid joins and/or subqueries, and may (and often will) be much faster.
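If you want to sanity-check the lag() logic outside the database, here is a minimal Python sketch of the same idea (the rows and the valid_orders helper are illustrative, not part of the original schema):

```python
from datetime import date

# Rows mirroring the question's table: (order_no, status, status_date)
orders = [
    (1, 1, date(2016, 10, 18)), (1, 2, date(2016, 10, 19)), (1, 3, date(2016, 10, 20)),
    (2, 1, date(2016, 10, 20)), (2, 3, date(2016, 10, 23)),
    (3, 2, date(2016, 10, 24)), (3, 3, date(2016, 10, 30)),
]

def valid_orders(rows, max_days=3):
    """Emulate lag(...) over (partition by order_no order by status_date):
    keep orders whose status-3 row directly follows a status-2 row
    no more than max_days later."""
    rows = sorted(rows, key=lambda r: (r[0], r[2]))
    hits = []
    for prev, curr in zip(rows, rows[1:]):
        if (prev[0] == curr[0]              # same partition (order_no)
                and prev[1] == 2 and curr[1] == 3
                and (curr[2] - prev[2]).days <= max_days):
            hits.append(curr[0])
    return hits

print(valid_orders(orders))  # [1] -- only order 1 qualifies
```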
with
-- begin test data; not part of the solution
orders ( order_no, status, status_date ) as (
select 1, 1, to_date('18-OCT-16', 'DD-MON-YY') from dual union all
select 1, 2, to_date('19-OCT-16', 'DD-MON-YY') from dual union all
select 1, 3, to_date('20-OCT-16', 'DD-MON-YY') from dual union all
select 2, 1, to_date('20-OCT-16', 'DD-MON-YY') from dual union all
select 2, 3, to_date('23-OCT-16', 'DD-MON-YY') from dual union all
select 3, 2, to_date('24-OCT-16', 'DD-MON-YY') from dual union all
select 3, 3, to_date('30-OCT-16', 'DD-MON-YY') from dual
),
-- end test data; solution is the word "with" from above, plus the query below
prep ( order_no, status, status_date, prev_status, prev_status_date) as (
select order_no, status, status_date,
lag(status) over (partition by order_no order by status_date),
lag(status_date) over (partition by order_no order by status_date)
from orders
)
select order_no
from prep
where status = 3 and prev_status = 2 and status_date - prev_status_date <= 3
;
ORDER_NO
--------
1
I have a record of dates:
with DateTable (dateItem) as
(
select '2022-07-03' union all
select '2022-07-05' union all
select '2022-07-04' union all
select '2022-07-09' union all
select '2022-07-12' union all
select '2022-07-13' union all
select '2022-07-18'
)
select dateItem
from DateTable
order by 1 asc
I want to get the ranges of consecutive dates in these records, like this:
with DateTableRange (dateItemStart, dateItemend) as
(
select '2022-07-03','2022-07-05' union all
select '2022-07-09','2022-07-09' union all
select '2022-07-12','2022-07-13' union all
select '2022-07-18','2022-07-18'
)
select dateItemStart, dateItemend
from DateTableRange
I am able to do it in SQL with a WHILE loop: take the first date, check whether the next date is one day later, and if so extend the end date, and so on in a loop.
But I don't know the best or most optimized way to do it, as there was a lot of looping and several temp tables involved.
Edited :
The data contains 3, 4, 5, while 6, 7, 8 are missing, so the first range is 3-5.
9 exists and 10 is missing, so the next range is 9-9.
So the ranges depend purely on the consecutive runs in the date table.
Any suggestions will be appreciated.
With the additional clarity, this requires a gaps-and-islands approach: first identify adjacent rows as groups, then use a window to find the first and last value of each group.
I'm sure this could be refined further, but it should give your desired results:
with DateTable (dateItem) as
(
select '2022-07-03' union all
select '2022-07-05' union all
select '2022-07-04' union all
select '2022-07-09' union all
select '2022-07-12' union all
select '2022-07-13' union all
select '2022-07-18'
), valid as (
select *,
case when exists (
select * from DateTable d2 where Abs(DateDiff(day, d.dateitem, d2.dateitem)) = 1
) then 1 else 0 end v
from DateTable d
), grp as (
select *,
Row_Number() over(order by dateitem) - Row_Number()
over (partition by v order by dateitem) g
from Valid v
)
select distinct
Iif(v = 0, dateitem, First_Value(dateitem) over(partition by g order by dateitem)) DateItemStart,
Iif(v = 0, dateitem, First_Value(dateitem) over(partition by g order by dateitem desc)) DateItemEnd
from grp
order by dateItemStart;
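The row-number-difference trick used above has a well-known Python analogue: subtracting each date's ordinal position from the date itself gives a constant within every unbroken run, so it works as a grouping key. A minimal sketch (to_ranges is my name, not from the answer):

```python
from datetime import date, timedelta
from itertools import groupby

dates = [date(2022, 7, d) for d in (3, 5, 4, 9, 12, 13, 18)]

def to_ranges(ds):
    """date minus its ordinal position is constant within each
    consecutive run, so it serves as the island grouping key."""
    ds = sorted(set(ds))
    out = []
    for _, run in groupby(enumerate(ds), key=lambda t: t[1] - timedelta(days=t[0])):
        run = [d for _, d in run]
        out.append((run[0], run[-1]))  # first and last value of the island
    return out

for start, end in to_ranges(dates):
    print(start, end)
```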
See Demo Fiddle
After clarification, this is definitely a 'gaps and islands' problem.
The solution can be like this
WITH DateTable(dateItem) AS
(
SELECT * FROM (
VALUES
('2022-07-03'),
('2022-07-05'),
('2022-07-04'),
('2022-07-09'),
('2022-07-12'),
('2022-07-13'),
('2022-07-18')
) t(v)
)
SELECT
MIN(dateItem) AS range_from,
MAX(dateItem) AS range_to
FROM (
SELECT
*,
SUM(CASE WHEN DATEADD(day, 1, prev_dateItem) >= dateItem THEN 0 ELSE 1 END) OVER (ORDER BY rn) AS range_id
FROM (
SELECT
ROW_NUMBER() OVER (ORDER BY dateItem) AS rn,
CAST(dateItem AS date) AS dateItem,
CAST(LAG(dateItem) OVER (ORDER BY dateItem) AS date) AS prev_dateItem
FROM DateTable
) groups
) islands
GROUP BY range_id
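The break-flag-plus-running-sum construction of range_id can also be sketched procedurally in Python; this mirrors the SUM(CASE ...) OVER (ORDER BY rn) above (variable names are illustrative):

```python
from datetime import date, timedelta

dates = sorted(date(2022, 7, d) for d in (3, 5, 4, 9, 12, 13, 18))

# range_id grows by 1 at every gap, exactly like the running SUM of break flags
range_id, groups, prev = 0, {}, None
for d in dates:
    if prev is None or d - prev > timedelta(days=1):
        range_id += 1                      # gap detected: start a new island
    groups.setdefault(range_id, []).append(d)
    prev = d

# MIN/MAX per range_id
ranges = [(g[0], g[-1]) for g in groups.values()]
print(ranges)
```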
I have a transaction table that stores amounts paid (positive amounts) and corrections (negative amounts). I am looking for a query that ignores a matching positive and negative pair of amounts on a date and returns the sum over the remaining transactions, ignoring those two.
Id Dept Date Amount
1 A 21-Apr-21 1100
1 A 21-Apr-21 1100
1 A 21-Apr-21 -1100
1 A 07-Apr-21 1100
1 A 03-Feb-21 100
1 A 12-Jan-21 500
The SQL query should ignore rows 2 and 3, as the amount was corrected and should not be counted as a transaction.
The output should be:
Id Dept sum(Amount) count(transaction)
1 A 2800 4
If I understood you correctly, you can use the solution below for that purpose.
I first ranked all the occurrences of the same amount value, then grouped them so that Oracle ignores all matching positive and negative values.
with YourSample (Id, Dept, Date#, Amount) as (
select 1, 'A', to_date('21-Apr-21', 'dd-Mon-RR', 'nls_date_language=english'), 1100 from dual union all
select 1, 'A', to_date('21-Apr-21', 'dd-Mon-RR', 'nls_date_language=english'), 1100 from dual union all
select 1, 'A', to_date('21-Apr-21', 'dd-Mon-RR', 'nls_date_language=english'), -1100 from dual union all
select 1, 'A', to_date('07-Apr-21', 'dd-Mon-RR', 'nls_date_language=english'), 1100 from dual union all
select 1, 'A', to_date('03-Feb-21', 'dd-Mon-RR', 'nls_date_language=english'), 100 from dual union all
select 1, 'A', to_date('12-Jan-21', 'dd-Mon-RR', 'nls_date_language=english'), 500 from dual
)
, ranked_rws as (
select Id, Dept, Date#
, abs(Amount)Amount
, sign(AMOUNT) row_sign
, row_number() OVER (PARTITION BY Id, Dept, Amount order by date#, rownum) rn
from YourSample t
)
, ignored_matched_pos_neg_values as (
select ID, DEPT, sum(row_sign) * AMOUNT AMOUNT/*, sum(row_sign)*/
from ranked_rws
group by ID, DEPT, AMOUNT, RN
having sum(row_sign) != 0 /* this line filters out all matching positive
and negative values (equal numbers of occurrences) */
)
select ID, DEPT, sum(AMOUNT) sum, count(*) transactions
from ignored_matched_pos_neg_values
group by ID, DEPT
;
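For intuition, the cancel-matching-pairs idea can be sketched outside the database. This Python fragment (names are illustrative) mirrors grouping on ABS(amount) and summing SIGN(amount); it is slightly simplified in that it nets per absolute amount rather than pairing occurrences by date order, which yields the same totals for this data:

```python
from collections import Counter

# (id, dept, date, amount) rows from the question
rows = [
    (1, 'A', '21-Apr-21', 1100),
    (1, 'A', '21-Apr-21', 1100),
    (1, 'A', '21-Apr-21', -1100),
    (1, 'A', '07-Apr-21', 1100),
    (1, 'A', '03-Feb-21', 100),
    (1, 'A', '12-Jan-21', 500),
]

def cancel_pairs(rows):
    """Net out matching positive/negative amounts per (id, dept, |amount|),
    like grouping on ABS(amount) and summing SIGN(amount)."""
    net = Counter()
    for rid, dept, _, amt in rows:
        net[(rid, dept, abs(amt))] += 1 if amt > 0 else -1
    total = sum(key[2] * n for key, n in net.items())   # surviving amount
    count = sum(net.values())                           # surviving transactions
    return total, count

print(cancel_pairs(rows))  # (2800, 4)
```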
Maybe some idea like this could work.
SELECT Id, Dept, Date, Amount, COUNT(*) AS RecordCount
INTO #temptable
FROM table GROUP BY ...
SELECT
t1.Id
,t1.Dept
,t1.Date
,(t1.RecordCount - COALESCE(t2.RecordCount, 0)) * t1.Amount
,t1.RecordCount - COALESCE(t2.RecordCount, 0)
FROM #temptable t1
LEFT JOIN #temptable t2 ON
t1.Id = t2.Id
AND t1.Dept = t2.Dept
AND t1.Date = t2.Date
AND (t1.Amount * -1) = t2.Amount
I am trying to display the last 60 minutes of statistics on a Pentaho dashboard (an Oracle 11g query passed to Pentaho).
I have a column (counter_buff) in my table with 1000 counter positions; sample data is shown below:
counter_buff = '0,8,9,3,2,6,....15,62' up to 1000 comma-separated values
I am trying to fetch individual comma-separated values from the table at the fixed positions provided and sum them. The problem is that with many positions the query gets bigger and slower, and a slower query means delayed statistics on the dashboard.
I created this sample query & result:
Query:
The numbers shown in {} are counter positions ({16}, {24}, ...); these positions will be user defined. The query also uses 6 UNION ALL branches like this one.
select * from
((SELECT MIN(to_char(TIMESTAMP,'HH24:MI:SS')) as TS,
'SELL' as "STATUS",
SUM((regexp_substr(counter_buff,'(.*?,){16}(.*?),', 1, 1,'', 2)) +
(regexp_substr(counter_buff,'(.*?,){24}(.*?),', 1, 1,'', 2)) +
(regexp_substr(counter_buff,'(.*?,){32}(.*?),', 1, 1,'', 2)) ......+
(regexp_substr(counter_buff,'(.*?,){168}(.*?),', 1, 1,'', 2))) AS "COUNTS"
FROM (SELECT * FROM SHOPS
order by TO_CHAR("TIMESTAMP",'YYYY-MM-DD HH24:MI:SS') desc) "SHOPS"
where TOY_NAME = 'LION'
and rownum <=60
GROUP BY TO_CHAR("TIMESTAMP",'HH24:MI'))
UNION ALL
(SELECT MIN(to_char(TIMESTAMP,'HH24:MI:SS')) as TS,
'RETURNED' as "STATUS",
SUM((regexp_substr(counter_buff,'(.*?,){17}(.*?),', 1, 1,'', 2)) +
(regexp_substr(counter_buff,'(.*?,){25}(.*?),', 1, 1,'', 2)) ..... +
(regexp_substr(counter_buff,'(.*?,){153}(.*?),', 1, 1,'', 2)) +
(regexp_substr(counter_buff,'(.*?,){161}(.*?),', 1, 1,'', 2)) +
(regexp_substr(counter_buff,'(.*?,){169}(.*?),', 1, 1,'', 2))) AS "COUNTS"
FROM (SELECT * FROM SHOPS
order by TO_CHAR("TIMESTAMP",'YYYY-MM-DD HH24:MI:SS') desc) "SHOPS"
where TOY_NAME = 'LION'
and rownum <=60
GROUP BY TO_CHAR("TIMESTAMP",'HH24:MI')) )
order by TS desc,STATUS desc
result:
This is just some rows of the result (to save space I pasted only half of it, but I am using the data of the last 60 minutes).
TS STATUS COUNTS
10:20:01 SELL 6
10:21:01 SELL 9
10:22:01 SELL 8
10:23:01 SELL 3
10:20:01 RETURNED 1
10:21:01 RETURNED 6
10:22:01 RETURNED 7
10:23:01 RETURNED 2
I am able to achieve my desired output, but I want a faster and smaller query.
I am new to Oracle queries.
You should filter the data as much as possible first, then do the rest of the work. Also, the UNION is not needed; you can do everything in one grouping and only unpivot the result at the end if needed.
Below are two queries which should be useful. In the first you have to write regexp_substr as many times as needed:
/* sample data
with shops(toy_name, time_stamp, counter_buff) as (
select 'LION', timestamp '2018-07-27 13:15:27', '0,8,9,3,2,6,15,62' from dual union all
select 'BEAR', timestamp '2018-07-27 13:44:06', '7,3,9,3,3,6,11,39' from dual union all
select 'LION', timestamp '2018-07-27 16:03:09', '7,3,151,44,3,6,11,39' from dual union all
select 'LION', timestamp '2018-07-27 16:03:49', '7,3,11,4,3,6,11,39' from dual )
-- end of data */
select to_char(time_stamp, 'hh24:mi') ts,
sum(regexp_substr(counter_buff,'(.*?,){2}(.*?),', 1, 1,'', 2) +
regexp_substr(counter_buff,'(.*?,){5}(.*?),', 1, 1,'', 2)) sell,
sum(regexp_substr(counter_buff,'(.*?,){3}(.*?),', 1, 1,'', 2) +
regexp_substr(counter_buff,'(.*?,){6}(.*?),', 1, 1,'', 2)) retu
from (select time_stamp, counter_buff, row_number() over (order by time_stamp desc) rn
from shops where toy_name = 'LION') t
where rn <= 60
group by to_char(time_stamp, 'hh24:mi')
In the second I join two tables of predefined numbers to your data. These are the "user defined positions", used next as parameters for regexp_substr.
with
/* sample data
shops(toy_name, time_stamp, counter_buff) as (
select 'LION', timestamp '2018-07-27 13:15:27', '0,8,9,3,2,6,15,62' from dual union all
select 'BEAR', timestamp '2018-07-27 13:44:06', '7,3,9,3,3,6,11,39' from dual union all
select 'LION', timestamp '2018-07-27 16:03:09', '7,3,151,44,3,6,11,39' from dual union all
select 'LION', timestamp '2018-07-27 16:03:49', '7,3,11,4,3,6,11,39' from dual ),
*/ -- end of sample data
sell as (select rownum rn, column_value cs from table(sys.odcinumberlist(2, 5)) ),
retu as (select rownum rn, column_value cr from table(sys.odcinumberlist(3, 6)) )
select *
from (
select sum(regexp_substr(counter_buff,'(.*?,){'||cs||'}(.*?),', 1, 1,'', 2)) sell,
sum(regexp_substr(counter_buff,'(.*?,){'||cr||'}(.*?),', 1, 1,'', 2)) retu, ts
from (select to_char(time_stamp, 'HH24:MI') ts, counter_buff
from (select * from shops where toy_name = 'LION' order by time_stamp desc)
where rownum <= 60)
cross join sell join retu using (rn) group by ts)
unpivot (val for status in (sell, retu))
In both queries I assumed that SELL is in positions (2, 5) and RETURNED in positions (3, 6). Also try row_number() against rownum and check which is faster for you. In both cases the data is read only once, which should speed up the calculations.
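For intuition, what each regexp_substr call does: the pattern '(.*?,){n}(.*?),' skips n comma-separated values and captures the next one, i.e. the value at 0-based index n after a split. A small Python sketch of the same extract-and-sum step (pick is my name, not from the query):

```python
counter_buff = '0,8,9,3,2,6,15,62'

def pick(buff, positions):
    """Sum the values at the given positions. Position n corresponds to
    regexp_substr(buff, '(.*?,){n}(.*?),', 1, 1, '', 2), which skips n
    comma-separated values -- i.e. 0-based index n after splitting."""
    values = buff.split(',')
    return sum(int(values[n]) for n in positions)

print(pick(counter_buff, [2, 5]))   # SELL positions: 9 + 6 = 15
print(pick(counter_buff, [3, 6]))   # RETURNED positions: 3 + 15 = 18
```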
I have a table which shows a good, bad, or other status for each device every day. I want to display one row per device with today's status and the previous best status ('Good' if the status was Good at any time in the span, otherwise the previous day's status). I am using a join; the query is shared below.
SELECT t1.devid,
t1.status AS Today_status,
t2.status AS yest_status,
t2.runtime AS yest_runtime
FROM devtable t1
INNER JOIN devtable t2
ON t1.devid = t2.devid
AND t1.RUNTIME = '17-jul-2018'
AND t2.runtime > '30-jun-2018'
ORDER BY t1.devID, (CASE WHEN t2.status LIKE 'G%' THEN 0 END), t2.runtime;
Now I am not able to group it down to a single record per device (I am getting many records per device). Can you suggest a solution for this?
This would be easier to interpret with sample data and results, but it sounds like you want something like:
select devid, runtime, status, prev_status,
coalesce(good_status, prev_status) as best_status
from (
select devid, runtime, status,
lag(status) over (partition by devid order by runtime) as prev_status,
max(case when status = 'Good' then status end) over (partition by devid) as good_status
from (
select devid, runtime, status
from devtable
where runtime > date '2018-06-30'
)
)
where runtime = date '2018-07-17';
The innermost query restricts the date range; if you need an upper bound on that (i.e. it isn't today as in your example) then include that as another filter.
The next layer out uses lag() and max() analytic functions to find the previous status, and any 'Good' status (via a case expression), for each ID.
The outer query then filters to only show the target end date, and uses coalesce() to show 'Good' if that existed, or the previous status if not.
Demo with some made-up sample data in a CTE:
with devtable (devid, runtime, status) as (
select 1, date '2018-06-30', 'Good' from dual -- should be ignored
union all select 1, date '2018-07-01', 'a' from dual
union all select 1, date '2018-07-16', 'b' from dual
union all select 1, date '2018-07-17', 'c' from dual
union all select 2, date '2018-07-01', 'Good' from dual
union all select 2, date '2018-07-16', 'e' from dual
union all select 2, date '2018-07-17', 'f' from dual
union all select 3, date '2018-07-01', 'g' from dual
union all select 3, date '2018-07-16', 'Good' from dual
union all select 3, date '2018-07-17', 'i' from dual
union all select 4, date '2018-07-01', 'j' from dual
union all select 4, date '2018-07-16', 'k' from dual
union all select 4, date '2018-07-17', 'Good' from dual
)
select devid, runtime, status, prev_status,
coalesce(good_status, prev_status) as best_status
from (
select devid, runtime, status,
lag(status) over (partition by devid order by runtime) as prev_status,
max(case when status = 'Good' then status end) over (partition by devid) as good_status
from (
select devid, runtime, status
from devtable
where runtime > date '2018-06-30'
)
)
where runtime = date '2018-07-17';
DEVID RUNTIME STAT PREV BEST
---------- ---------- ---- ---- ----
1 2018-07-17 c b b
2 2018-07-17 f e Good
3 2018-07-17 i Good Good
4 2018-07-17 Good k Good
You could remove the innermost query by moving that filter into the case expression:
select devid, runtime, status, prev_status,
coalesce(good_status, prev_status) as best_status
from (
select devid, runtime, status,
lag(status) over (partition by devid order by runtime) as prev_status,
max(case when runtime > date '2018-06-30' and status = 'Good' then status end)
over (partition by devid) as good_status
from devtable
)
where runtime = date '2018-07-17';
but that would probably do quite a lot more work as it would examine and calculate a lot of data you don't care about.
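The lag()/max() combination can also be sanity-checked with a small Python sketch of the same per-device logic (best_status is an illustrative helper, and the rows are a subset of the demo data above):

```python
from datetime import date

# (devid, runtime, status) rows, a subset of the demo CTE
rows = [
    (1, date(2018, 7, 1), 'a'), (1, date(2018, 7, 16), 'b'), (1, date(2018, 7, 17), 'c'),
    (2, date(2018, 7, 1), 'Good'), (2, date(2018, 7, 16), 'e'), (2, date(2018, 7, 17), 'f'),
]

def best_status(rows, target):
    """Per device: (today's status, 'Good' if seen anywhere in the span,
    otherwise the status immediately before today -- i.e. the coalesce)."""
    out = {}
    for devid in sorted({r[0] for r in rows}):
        hist = sorted(r[1:] for r in rows if r[0] == devid)  # by runtime
        statuses = [s for _, s in hist]
        idx = next(i for i, (rt, _) in enumerate(hist) if rt == target)
        prev = statuses[idx - 1] if idx > 0 else None        # lag(status)
        out[devid] = (statuses[idx], 'Good' if 'Good' in statuses else prev)
    return out

print(best_status(rows, date(2018, 7, 17)))  # {1: ('c', 'b'), 2: ('f', 'Good')}
```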
Analytic functions should do what you want. It is unclear what your results should look like, but this gathers the information you need:
SELECT d.*
FROM (SELECT d.*,
LAG(d.status) OVER (PARTITION BY d.devid ORDER BY d.runtime) as prev_status,
LAG(d.runtime) OVER (PARTITION BY d.devid ORDER BY d.runtime) as prev_runtime,
ROW_NUMBER() OVER (PARTITION BY d.devid ORDER BY d.runtime) as seqnum,
SUM(CASE WHEN status = 'GOOD' THEN 1 ELSE 0 END) OVER (PARTITION BY d.devid) as num_good
FROM devtable d
WHERE d.runtime = DATE '2018-07-17' AND
d.runtime > DATE '2018-06-30'
) d
WHERE seqnum = 1;
I have the following script that shows the current month and the previous 11 months, based on sysdate...
select * from (
select level-1 as num,
to_char(add_months(trunc(sysdate,'MM'),- (level-1)),'MM')||'-'||to_char(add_months(trunc(sysdate,'MM'),- (level-1)),'YYYY') as dte
from dual
connect by level <= 12
)
pivot (
max(dte) as "DATE"
for num in (0 as "CURRENT", 1 as "1", 2 as "2", 3 as "3", 4 as "4", 5 as "5",6 as "6",7 as "7",8 as "8",9 as "9",10 as "10", 11 as "11"))
I want to create a table that shows the delivery qty where the delivery date ('MM-YYYY') equals a date generated by the above script.
I get the delivery qty and delivery date from the following:
select dp.catnr,
nvl(sum(dp.del_qty),0) del_qty
from bds_dhead#sid_to_cdsuk dh,
bds_dline#sid_to_cdsuk dp
where dp.dhead_no = dh.dhead_no
and dh.d_status = '9'
and dp.article_no = 9||'2EDVD0007'
and to_char(trunc(dh.actshpdate),'MM')||'-'||to_char(trunc(dh.actshpdate),'YYYY') = -- this is where I would like to match the result of the above script
group by dp.catnr
The results would look something like...
Any ideas would be much appreciated.
Thanks, SMORF
with date_series as (
select add_months(trunc(sysdate,'MM'), 1 - lvl) start_date,
add_months(trunc(sysdate,'MM'), 2-lvl) - 1/24/60/60 end_date
from (select level lvl from dual connect by level <= 12)
),
your_table as (
select 'catnr1' catnr, 100500 del_qty, sysdate actshpdate from dual
union all select 'catnr1' catnr, 10 del_qty, sysdate-30 actshpdate from dual
union all select 'catnr2' catnr, 15 del_qty, sysdate-60 actshpdate from dual
),
subquery as (
select to_char(ds.start_date, 'MM-YYYY') dte, t.catnr, sum(nvl(t.del_qty, 0)) del_qty
from date_series ds left join your_table t
on (t.actshpdate between ds.start_date and ds.end_date)
group by to_char(ds.start_date, 'MM-YYYY'), t.catnr
)
select * from subquery pivot (sum(del_qty) s for dte in ('11-2013' d1, '12-2013' d2, '08-2014' d10, '09-2014' d11, '10-2014' d12))
where catnr is not null;
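The date_series idea (generate one row per month start, then bucket deliveries into those month ranges before pivoting) translates to Python roughly as follows (the helper names and sample deliveries are mine, not from the question):

```python
from datetime import date

def month_start(d, offset=0):
    """First day of the month `offset` months before d's month,
    like add_months(trunc(sysdate, 'MM'), -offset)."""
    m = d.year * 12 + (d.month - 1) - offset
    return date(m // 12, m % 12 + 1, 1)

def bucket_by_month(today, deliveries, months=12):
    """Sum del_qty per (MM-YYYY, catnr) over the last `months` months."""
    series = {month_start(today, i) for i in range(months)}
    totals = {}
    for catnr, qty, shipped in deliveries:
        key = month_start(shipped)
        if key in series:                       # the left-join-with-range step
            label = key.strftime('%m-%Y')
            totals[(label, catnr)] = totals.get((label, catnr), 0) + qty
    return totals

deliveries = [('catnr1', 100, date(2014, 10, 5)),
              ('catnr1', 10, date(2014, 9, 20)),
              ('catnr2', 15, date(2014, 8, 2))]
print(bucket_by_month(date(2014, 10, 15), deliveries))
```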