Oracle SQL - Add numbers separated by delimiter, columnwise

I have multiple rows with values like
a_b_c_d_e_f and x_y_z_m_n_o
and I need a SQL query that returns a result like a+x_b+y_c+z_d+m...
Sample data as requested
What I want to do is aggregate it by Datetime. Aggregating Total is simple, but how can I do that for the last column? Thanks.
Expected Result

Here's one option; read the comments within the code. I didn't feel like typing too much, so two dates will have to do.
Sample data (you already have that, so don't type it; the code you need begins at line #10):
SQL> with
2 -- sample data
3 test (datum, total, col) as
4 (select date '2020-07-20', 100, '10,0,20,30,0' from dual union all
5 select date '2020-07-20', 150, '15,3,40,30,2' from dual union all
6 --
7 select date '2020-07-19', 200, '50,6,50,30,8' from dual union all
8 select date '2020-07-19', 300, '20,1,40,10,2' from dual
9 ),
Split the comma-separated values into rows. Note the RB value, which will help us sum matching positions:
10 -- split comma-separated values into rows
11 temp as
12 (select
13 datum,
14 total,
15 to_number(regexp_substr(col, '\d+', 1, column_value)) val,
16 column_value rb
17 from test cross join
18 table(cast(multiset(select level from dual
19 connect by level <= regexp_count(col, ',') + 1
20 ) as sys.odcinumberlist))
21 ),
Computing summaries is simple; nothing special about it. We'll keep the RB value as it'll be needed in the last step:
22 -- compute summaries
23 summary as
24 (select datum,
25 sum(total) total,
26 sum(val) sumval,
27 rb
28 from temp
29 group by datum, rb
30 )
The last step: using LISTAGG, aggregate the values back into a comma-separated string, but this time added to each other:
31 -- final result
32 select datum,
33 total,
34 listagg(sumval, ',') within group (order by rb) new_col
35 from summary
36 group by datum, total
37 order by datum desc, total;
DATUM TOTAL NEW_COL
------------------- ---------- --------------------
20.07.2020 00:00:00 250 25,3,60,60,2
19.07.2020 00:00:00 500 70,7,90,40,10
SQL>
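If the real values are underscore-delimited as in the question, only the delimiter changes. A condensed, untested sketch of the same idea (same column names as above, two sample rows for brevity):
with test (datum, total, col) as
  -- underscore-delimited sample data
  (select date '2020-07-20', 100, '10_0_20_30_0' from dual union all
   select date '2020-07-20', 150, '15_3_40_30_2' from dual
  )
select datum,
       max(total) total,
       listagg(sumval, '_') within group (order by rb) new_col
from (-- split on '_' and sum the values that share the same position (RB)
      select datum,
             sum(total) total,
             sum(to_number(regexp_substr(col, '\d+', 1, column_value))) sumval,
             column_value rb
      from test cross join
           table(cast(multiset(select level from dual
                               connect by level <= regexp_count(col, '_') + 1
                              ) as sys.odcinumberlist))
      group by datum, column_value
     )
group by datum;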

Related

Oracle: Get latest value from a group by query, among other aggregations

I have a GROUP BY query returning avg and max from a set of records. I need to return a new column with the latest value of a column ("records") based on another column ("dates").
This query
with x as (select 'A' process, 10 records, sysdate-5 dates from dual union all
select 'A' process, 20 records, sysdate-4 dates from dual union all
select 'A' process, 30 records, sysdate-3 dates from dual union all
select 'B' process, 25 records, sysdate-2 dates from dual union all
select 'B' process, 15 records, sysdate-1 dates from dual)
select process,
avg(records) avgu,
max(records) maxu
from x
group by process
order by 1
returns:
PROCESS  AVG  MAX
A        20   30
B        20   25
I need a new column (LATEST) with latest value of records based on dates, keeping the old columns too:
PROCESS  MAX  LATEST
A        30   30
B        25   15
I'm playing with window functions like RANK() OVER (PARTITION BY ...), but I can't get the desired outcome in a single query.
Thank you in advance for any idea.
Here's one option:
Sample data:
SQL> with x as (
2 select 'A' process,10 records,sysdate-5 dates from dual union all
3 select 'A',20,sysdate-4 from dual union all
4 select 'A',30,sysdate-3 from dual union all
5 select 'B',25,sysdate-2 from dual union all
6 select 'B',15,sysdate-1 from dual),
The query begins here: first find the latest value for each process, then - in the final query - aggregate the required values.
7 temp as
8 (select process,
9 records,
10 dates,
11 first_value(records) over (partition by process order by dates desc) latest
12 from x
13 )
14 select process,
15 avg(records) avgu,
16 max(records) maxu,
17 max(latest) latest
18 from temp
19 group by process
20 order by 1;
P AVGU MAXU LATEST
- ---------- ---------- ----------
A 20 30 30
B 20 25 15
SQL>
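For completeness, the same LATEST value can also be computed in a single pass with the KEEP (DENSE_RANK LAST ...) aggregate, which picks the RECORDS value belonging to the newest DATES in each group. A sketch reusing the X sample data from above:
select process,
       avg(records) avgu,
       max(records) maxu,
       -- RECORDS value from the row with the latest DATES per process
       max(records) keep (dense_rank last order by dates) latest
from x
group by process
order by process;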

Generate a range of records depending on from-to dates

I have a table of records like this:
ITEM  FROM        TO
A     2018-01-03  2018-03-16
B     2021-05-25  2021-11-10
The output of select should look like:
ITEM  MONTH  YEAR
A     01     2018
A     02     2018
A     03     2018
B     05     2021
B     06     2021
B     07     2021
B     08     2021
Also, the range should not exceed the current month. In the example above we are assuming the current day is 2021-08-01.
I am trying to do something similar to THIS with CONNECT BY LEVEL, but as soon as I also select from my table next to dual and try to order the records, the query never completes. I also have to join a few other tables to the selection, but I don't think that makes a difference.
I would very much appreciate your help.
Row generator it is, but not as you did it; most probably you're missing lines #11 - 16 in my query (or their alternative).
SQL> with test (item, date_from, date_to) as
2 -- sample data
3 (select 'A', date '2018-01-03', date '2018-03-16' from dual union all
4 select 'B', date '2021-05-25', date '2021-11-10' from dual
5 )
6 -- query that returns desired result
7 select item,
8 extract(month from (add_months(date_from, column_value - 1))) month,
9 extract(year from (add_months(date_from, column_value - 1))) year
10 from test cross join
11 table(cast(multiset
12 (select level
13 from dual
14 connect by level <=
15 months_between(trunc(least(sysdate, date_to), 'mm'), trunc(date_from, 'mm')) + 1
16 ) as sys.odcinumberlist))
17 order by item, year, month;
ITEM MONTH YEAR
----- ---------- ----------
A 1 2018
A 2 2018
A 3 2018
B 5 2021
B 6 2021
B 7 2021
B 8 2021
7 rows selected.
SQL>
Recursive CTEs are the standard SQL approach to this type of problem. In Oracle, this looks like:
with cte(item, fromd, tod) as (
select item, fromd, tod
from t
union all
select item, add_months(fromd, 1), tod
from cte
where add_months(fromd, 1) < last_day(tod)
)
select item, extract(year from fromd) as year, extract(month from fromd) as month
from cte
order by item, fromd;
Here is a db<>fiddle.
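The question also asks that the range should not exceed the current month; an untested sketch of the same recursion with that cap added:
with cte (item, fromd, tod) as (
  select item, fromd, tod
  from t
  union all
  select item, add_months(fromd, 1), tod
  from cte
  -- stop at the earlier of the end date's month and the current month
  where add_months(trunc(fromd, 'mm'), 1) <= least(trunc(tod, 'mm'), trunc(sysdate, 'mm'))
)
select item, extract(year from fromd) as year, extract(month from fromd) as month
from cte
order by item, fromd;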

Cumulative sum using CASE statement in Oracle SQL

I have simple data:
DATE       COUNT BY ENGLISH  COUNT BY CHINESE
08-Mar-19  12                54
09-Mar-19  15                66
10-Mar-19  45                32
11-Mar-19  21                70
12-Mar-19  57                64
29-Mar-19  43                53
30-Mar-19  67                21
I want to group this data by week, and the sum should be cumulative. The data starts on 8 March, so the weeks should be calculated starting from that date. The result should be:
WEEK                   COUNT BY ENGLISH  COUNT BY CHINESE
08-MAR-19 - 14-MAR-19  150               286
15-MAR-19 - 22-MAR-19  150               286   (no data, so same as above)
23-MAR-19 - 30-MAR-19  260               360
I tried using cumulative SUM but was not able to achieve it.
You can generate your week ranges, then use an outer join to see which data fits in each week, and use an analytic sum to get the result you want:
with week_ranges (date_from, date_to) as (
  select min_date + ((level - 1) * 7), min_date + (level * 7)
  from (
    select min(some_date) as min_date, ceil((max(some_date) - min(some_date)) / 7) as weeks
    from your_table
  )
  connect by level <= weeks
)
select distinct wr.date_from, wr.date_to - 1 as date_to,
       sum(count_english) over (order by wr.date_from) as count_english,
       sum(count_chinese) over (order by wr.date_from) as count_chinese
from week_ranges wr
left join your_table yt
  on yt.some_date >= wr.date_from
  and yt.some_date < wr.date_to
order by date_from;
which with your sample data gets:
DATE_FROM DATE_TO COUNT_ENGLISH COUNT_CHINESE
---------- ---------- ------------- -------------
2019-03-08 2019-03-14 150 286
2019-03-15 2019-03-21 150 286
2019-03-22 2019-03-28 150 286
2019-03-29 2019-04-04 260 360
Note this splits it up into four 7-day weeks, rather than one of 7 days and two of 8 days...
db<>fiddle
Here's one option; note that "my" weeks differ from yours because your data is somewhat inconsistent: the week ranges vary from 6 to 7 days. That's also why the final result is different, but the general idea should be OK.
SQL> alter session set nls_date_format = 'dd.mm.yyyy';
Session altered.
SQL> with test (datum, cbe) as
2 -- sample data
3 (select date '2019-03-08', 12 from dual union all
4 select date '2019-03-09', 15 from dual union all
5 select date '2019-03-10', 45 from dual union all
6 select date '2019-03-11', 21 from dual union all
7 select date '2019-03-12', 57 from dual union all
8 select date '2019-03-29', 43 from dual union all
9 select date '2019-03-30', 67 from dual
10 ),
11 span as
12 -- min and max date value, so that we could create a "calendar"
13 (select min(datum) mindat,
14 max(datum) maxdat
15 from test
16 ),
17 periods as
18 -- "calendar" whose periods are weeks
19 (select s.mindat + (level - 1) * 7 datum_from,
20 (s.mindat + level * 7) - 1 datum_to
21 from span s
22 connect by level <= (s.maxdat - s.mindat) / 7 + 1
23 )
24 -- running sum per weeks
25 select distinct
26 p.datum_from,
27 p.datum_to,
28 sum(t.cbe) over (order by p.datum_from) sum_cbe
29 from test t full outer join periods p on t.datum between p.datum_from and p.datum_to
30 order by p.datum_from;
DATUM_FROM DATUM_TO SUM_CBE
---------- ---------- ----------
08.03.2019 14.03.2019 150
15.03.2019 21.03.2019 150
22.03.2019 28.03.2019 150
29.03.2019 04.04.2019 260
SQL>
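The query above tracks only the English counts (CBE); the Chinese counts would just be one more column in the sample data and one more analytic SUM. A sketch of the final part, assuming the TEST CTE also carried a CBC column with the Chinese counts:
-- running sums per week for both columns (CBC assumed to exist in TEST)
select distinct
       p.datum_from,
       p.datum_to,
       sum(t.cbe) over (order by p.datum_from) sum_cbe,
       sum(t.cbc) over (order by p.datum_from) sum_cbc
from test t full outer join periods p on t.datum between p.datum_from and p.datum_to
order by p.datum_from;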

How to make a time-dependent distribution in SQL?

I have an SQL table in which I keep project information coming from Primavera.
Suppose that I have columns for Start Date, End Date, Duration, and Total Qty, as shown below.
How can I distribute Total Qty over months using this information? What kind of additional columns or SQL queries do I need in order to get a correct monthly distribution?
Thanks in advance.
ITEMNAME  QUANTITY  STARTDATE   DURATION  ENDDATE
item1     108       2013-03-25  720       2013-07-26
item2     640       2013-03-25  720       2013-07-26
...
I think the key is to break the records apart by month. Here is an example of how to do it:
with months as (
select 1 as mon union all select 2 union all select 3 union all
select 4 as mon union all select 5 union all select 6 union all
select 7 as mon union all select 8 union all select 9 union all
select 10 as mon union all select 11 union all select 12
)
select item, m.mon, quantity / nummonths
from (select t.*, (month(enddate) - month(startdate) + 1) as nummonths
from t
) t join
months m
on month(t.startDate) <= m.mon and
month(t.endDate) >= m.mon;
This works because all the months are within the same year -- as in your example. You are quite vague on how the split should be calculated. So, I assumed that every month from the start to the end gets an equal amount.
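Note that month() is not an Oracle function; in Oracle you would use extract(month from ...). If the date ranges can also cross year boundaries, here's an untested, Oracle-flavored sketch (assuming a table t with columns itemname, quantity, startdate and enddate) that spreads the quantity evenly across whole months:
-- one output row per (item, month) between STARTDATE and ENDDATE,
-- each carrying an equal share of QUANTITY
select t.itemname,
       to_char(add_months(trunc(t.startdate, 'mm'), g.column_value - 1), 'yyyy-mm') as month,
       round(t.quantity /
             (months_between(trunc(t.enddate, 'mm'), trunc(t.startdate, 'mm')) + 1), 2) as monthly_qty
from t cross join
     table(cast(multiset(select level
                         from dual
                         connect by level <=
                           months_between(trunc(t.enddate, 'mm'), trunc(t.startdate, 'mm')) + 1
                        ) as sys.odcinumberlist)) g
order by t.itemname, month;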

Query the Minimum Value per day within a month's worth of data

I have two sets of pricing data (A and B). Set A consists of all of my pricing data per order over a month. Set B consists of all of my competitor's pricing data over the same month. I want to compare my competitor's lowest price to each of my prices per day.
Graphically, the data appears like this:
DATE  SET A  SET B
1     25     31
1     54     47
1     23     56
1     12     23
1     76     40
1     42
I want to pass only the lowest price to a CASE statement which evaluates which prices are better. I would like to process an entire month's worth of data all at once, so in my example, dates 1 thru 30(1) would be included and crunched all at once, and for each day there would be only one value from Set B included: the lowest price in the set.
Important note: Set B does not have a data point for each point in Set A.
Hopefully this makes sense. Thanks in advance for any help you may be able to render.
That's a strange example you have - do you really have prices ranging from 12 to 76 within a single day?
Anyway, left joining your (grouped) data with their (grouped) data should work (untested):
with
  -- lowest of my prices per day
  my_min_prices as (
    select price_date, min(price_value) min_price from my_prices group by price_date),
  -- lowest competitor price per day
  their_min_prices as (
    select price_date, min(price_value) min_price from their_prices group by price_date)
select
  mine.price_date,
  (case
     when theirs.min_price is null then mine.min_price
     when theirs.min_price >= mine.min_price then mine.min_price
     else theirs.min_price
   end) min_price
from
  my_min_prices mine
  left join their_min_prices theirs on mine.price_date = theirs.price_date
I'm still not sure that I understand your requirements. My best guess is that you want something like
SQL> ed
Wrote file afiedt.buf
1 with your_data as (
2 select 1 date_id, 25 price_a,31 price_b from dual
3 union all
4 select 1, 54, 47 from dual union all
5 select 1, 23, 56 from dual union all
6 select 1, 12, 23 from dual union all
7 select 1, 76, 40 from dual union all
8 select 1, 42, null from dual)
9 select date_id,
10 sum( case when price_a < min_price_b
11 then 1
12 else 0
13 end) better,
14 sum( case when price_a = min_price_b
15 then 1
16 else 0
17 end) tie,
18 sum( case when price_a > min_price_b
19 then 1
20 else 0
21 end) worse
22 from( select date_id,
23 price_a,
24 min(price_b) over (partition by date_id) min_price_b
25 from your_data )
26* group by date_id
SQL> /
DATE_ID BETTER TIE WORSE
---------- ---------- ---------- ----------
1 1 1 4
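If you want a per-order verdict instead of counts, the same MIN(...) OVER (PARTITION BY date_id) can feed a row-level CASE. An untested sketch using the same sample data:
with your_data as (
  select 1 date_id, 25 price_a, 31 price_b from dual union all
  select 1, 54, 47 from dual union all
  select 1, 23, 56 from dual union all
  select 1, 12, 23 from dual union all
  select 1, 76, 40 from dual union all
  select 1, 42, null from dual
)
select date_id,
       price_a,
       min_price_b,
       -- compare each of my prices to the day's lowest competitor price
       case
         when min_price_b is null   then 'no competitor price'
         when price_a < min_price_b then 'better'
         when price_a = min_price_b then 'tie'
         else                            'worse'
       end verdict
from (select date_id,
             price_a,
             min(price_b) over (partition by date_id) min_price_b
      from your_data)
order by date_id, price_a;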