I want to use the pre and post revenue of an interaction to calculate net revenue. Sometimes there are multiple customers in an interaction. The data looks like:
InteractionID | Customer ID | Pre | Post
--------------+-------------+--------+--------
1 | ab12 | 10 | 30
2 | cd12 | 40 | 15
3 | de12;gh12 | 15;30 | 20;10
The expected output sums the semicolon-separated values in Pre and Post per interaction, then calculates the net:
InteractionID | Customer ID | Pre | Post | Net
--------------+---------------+--------+-------+------
1 | ab12 | 10 | 30 | 20
2 | cd12 | 40 | 15 | -25
3 | de12;gh12 | 45 | 30 | -15
How do I get the net revenue column?
The proper solution is to normalize your relational design by adding a separate table for customers and their respective pre and post values.
While stuck with the current design, this would do it:
SELECT *, post - pre AS net
FROM (
   SELECT interaction_id, customer_id
        , (SELECT sum(x::numeric) FROM string_to_table(pre, ';') x) AS pre
        , (SELECT sum(x::numeric) FROM string_to_table(post, ';') x) AS post
   FROM   tbl
   ) sub;
string_to_table() requires at least Postgres 14. You did not declare your Postgres version, so I assume the current version, Postgres 14.
For older versions, replace it with regexp_split_to_table() or unnest(string_to_array(...)).
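Outside the database, the split-and-sum logic itself is easy to sanity-check; a minimal Python sketch using the sample values from the question (the helper name `split_sum` is my own, not part of any answer):

```python
# Split the semicolon-separated pre/post strings, sum them,
# and compute net = post - pre per interaction.
rows = [
    (1, "ab12", "10", "30"),
    (2, "cd12", "40", "15"),
    (3, "de12;gh12", "15;30", "20;10"),
]

def split_sum(s):
    """Sum the semicolon-separated numeric values in s."""
    return sum(float(x) for x in s.split(";"))

result = [
    (iid, cust, split_sum(pre), split_sum(post),
     split_sum(post) - split_sum(pre))
    for iid, cust, pre, post in rows
]
# nets come out as 20, -25, -15, matching the expected output
```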
Say in MonetDB (specifically, the embedded version from the "MonetDBLite" R package) I have a table "events" containing entity ID codes and event start and end dates, of the format:
| id | start_date | end_date |
| 1 | 2010-01-01 | 2010-03-30 |
| 1 | 2010-04-01 | 2010-06-30 |
| 2 | 2018-04-01 | 2018-06-30 |
| ... | ... | ... |
The table holds approximately 80 million event rows, attributable to approximately 2.5 million unique entities (ID values). The dates appear to align nicely with calendar quarters, but I haven't thoroughly checked them, so assume they can be arbitrary. However, I have at least sense-checked that end_date > start_date.
I want to produce a table "nonevent_qtrs" listing calendar quarters where an ID has no event recorded, e.g.:
| id | last_doq |
| 1 | 2010-09-30 |
| 1 | 2010-12-31 |
| ... | ... |
| 1 | 2018-06-30 |
| 2 | 2010-03-30 |
| ... | ... |
(doq = day of quarter)
If the extent of an event spans any days of the quarter (including the first and last dates), then I wish for it to count as having occurred in that quarter.
To help with this, I have produced a "calendar table"; a table of quarters "qtrs", covering the entire span of dates present in "events", and of the format:
| first_doq | last_doq |
| 2010-01-01 | 2010-03-30 |
| 2010-04-01 | 2010-06-30 |
| ... | ... |
And tried using a non-equi merge like so:
create table nonevents as
select id, last_doq
from events
full outer join qtrs
    on start_date > last_doq
    or end_date < first_doq
group by id, last_doq
But this is a) terribly inefficient and b) certainly wrong, since most IDs are listed as being non-eventful for all quarters.
How can I produce the table "nonevent_qtrs" I described, which contains a list of quarters for which each ID had no events?
If it's relevant, the ultimate use-case is to calculate runs of non-events for time-till-event analysis and prediction. It feels like run-length encoding will be required. If there's a more direct approach than what I've described above, I'm all ears. The only reason I'm focusing on non-event runs to begin with is to try to limit the size of the cross-product. I've also considered producing something like:
| id | last_doq | event |
| 1 | 2010-01-31 | 1 |
| ... | ... | ... |
| 1 | 2018-06-30 | 0 |
| ... | ... | ... |
But although more useful this may not be feasible due to the size of the data involved. A wide format:
| id | 2010-01-31 | ... | 2018-06-30 |
| 1 | 1 | ... | 0 |
| 2 | 0 | ... | 1 |
| ... | ... | ... | ... |
would also be handy, but since MonetDB is column-store I'm not sure whether this is more or less efficient.
Let me assume that you have a table of quarters, with each quarter's start date and end date. You really need such a table if you want the quarters that don't exist in the data. After all, how far back in time or forward in time do you want to go?
Then, you can generate all id/quarter combinations and filter out the ones that exist:
select i.id, q.*
from (select distinct id from events) i cross join
quarters q left join
events e
on e.id = i.id and
e.start_date <= q.quarter_end and
e.end_date >= q.quarter_start
where e.id is null;
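The cross-join-then-anti-join pattern above can be sanity-checked at small scale; here is a sketch using Python's sqlite3 with made-up sample data (the column names follow the answer, and ISO date strings compare correctly as text, which this relies on):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE events (id INT, start_date TEXT, end_date TEXT);
CREATE TABLE quarters (quarter_start TEXT, quarter_end TEXT);
INSERT INTO events VALUES
  (1, '2010-01-01', '2010-03-30'),
  (1, '2010-04-01', '2010-06-30'),
  (2, '2010-04-01', '2010-06-30');
INSERT INTO quarters VALUES
  ('2010-01-01', '2010-03-31'),
  ('2010-04-01', '2010-06-30'),
  ('2010-07-01', '2010-09-30');
""")

# All id/quarter combinations, minus those where an event overlaps the quarter.
nonevents = con.execute("""
SELECT i.id, q.quarter_end
FROM (SELECT DISTINCT id FROM events) i
CROSS JOIN quarters q
LEFT JOIN events e
  ON e.id = i.id
 AND e.start_date <= q.quarter_end
 AND e.end_date   >= q.quarter_start
WHERE e.id IS NULL
ORDER BY i.id, q.quarter_end
""").fetchall()
# id 1 has events in Q1 and Q2, so only Q3 is missing;
# id 2 has an event only in Q2, so Q1 and Q3 are missing.
```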
I have a Production Table and a Standing Data table. The relationship of Production to Standing Data is actually Many-To-Many which is different to how this relationship is usually represented (Many-to-One).
The standing data table holds a list of tasks and the score each task is worth. Tasks can appear multiple times with different "ValidFrom" dates, so the score can change at different points in time. What I am trying to do is query the Production Table so that the TaskID is looked up in the standing data table, using the date the record was logged to determine which score to return.
Here's an example of how I want the data to look:
Production Table:
+----------+------------+-------+-----------+--------+-------+
| RecordID | Date | EmpID | Reference | TaskID | Score |
+----------+------------+-------+-----------+--------+-------+
| 1 | 27/02/2020 | 1 | 123 | 1 | 1.5 |
| 2 | 27/02/2020 | 1 | 123 | 1 | 1.5 |
| 3 | 28/02/2020 | 1 | 123 | 1 | 2 |
| 4 | 29/02/2020 | 1 | 123 | 1 | 2 |
+----------+------------+-------+-----------+--------+-------+
Standing Data
+----------+--------+----------------+-------+
| RecordID | TaskID | DateActiveFrom | Score |
+----------+--------+----------------+-------+
| 1 | 1 | 01/02/2020 | 1.5 |
| 2 | 1 | 28/02/2020 | 2 |
+----------+--------+----------------+-------+
I have tried the below code but unfortunately due to multiple records meeting the criteria, the production data duplicates with two different scores per record:
SELECT p.[RecordID],
p.[Date],
p.[EmpID],
p.[Reference],
p.[TaskID],
s.[Score]
FROM ProductionTable as p
LEFT JOIN StandingDataTable as s
ON s.[TaskID] = p.[TaskID]
AND s.[DateActiveFrom] <= p.[Date];
What is the correct way to return the correct and singular/scalar Score value for this record based on the date?
You can use APPLY:
SELECT p.[RecordID], p.[Date], p.[EmpID], p.[Reference], p.[TaskID], s.[Score]
FROM ProductionTable as p OUTER APPLY
( SELECT TOP (1) s.[Score]
FROM StandingDataTable AS s
WHERE s.[TaskID] = p.[TaskID] AND
s.[DateActiveFrom] <= p.[Date]
ORDER BY S.DateActiveFrom DESC
) s;
If you want the score at record level instead, change the WHERE clause in the APPLY accordingly.
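For databases without APPLY, the same "latest score as of the date" lookup can be expressed with a correlated subquery. A small sketch using Python's sqlite3, with dates stored as ISO strings (an assumption of this sketch, not of the answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ProductionTable (RecordID INT, Date TEXT, TaskID INT);
CREATE TABLE StandingDataTable (TaskID INT, DateActiveFrom TEXT, Score REAL);
INSERT INTO ProductionTable VALUES
  (1, '2020-02-27', 1), (2, '2020-02-27', 1),
  (3, '2020-02-28', 1), (4, '2020-02-29', 1);
INSERT INTO StandingDataTable VALUES
  (1, '2020-02-01', 1.5), (1, '2020-02-28', 2);
""")

# Correlated subquery: for each production row, pick the score with
# the newest DateActiveFrom that is <= the production Date.
rows = con.execute("""
SELECT p.RecordID,
       (SELECT s.Score
        FROM StandingDataTable s
        WHERE s.TaskID = p.TaskID
          AND s.DateActiveFrom <= p.Date
        ORDER BY s.DateActiveFrom DESC
        LIMIT 1) AS Score
FROM ProductionTable p
ORDER BY p.RecordID
""").fetchall()
# Records 1-2 get the old score 1.5; records 3-4 get the new score 2.
```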
I have a situation where I have about 4000 tasks that all have different periodic rules for occurrences.
They are preventive maintenance tasks. The table I get them from only provides the start date and the frequency of occurrence in units of weeks.
Example:
Task (A) is scheduled to occur every two weeks, starting on week 1 of 2015.
Task (B) is scheduled to occur every 6 weeks, starting on week 2 of 2011.
And so on...
What I need to do is produce a result set that contains a record for each occurrence since the start point, for each task.
It's like generating a sequence.
Example:
Task | Year | Week
------|-------|-------
A | 2015 | 1
A | 2015 | 3
A | 2015 | 5
A | 2015 | 7
[...]
B | 2011 | 2
B | 2011 | 8
And so on...
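For intuition, the sequence each task needs is just an arithmetic progression over (year, week). A minimal Python sketch, assuming plain 52-week years for simplicity (real ISO calendars occasionally have 53-week years, so this is only an approximation):

```python
def occurrences(task, start_year, start_week, freq_weeks, end_year):
    """Yield (task, year, week) every freq_weeks, assuming 52-week years."""
    # Work in absolute week numbers to make the stepping trivial.
    week_index = start_year * 52 + (start_week - 1)
    last_index = (end_year + 1) * 52 - 1
    out = []
    while week_index <= last_index:
        out.append((task, week_index // 52, week_index % 52 + 1))
        week_index += freq_weeks
    return out

# Task A: every 2 weeks, starting on week 1 of 2015
a = occurrences("A", 2015, 1, 2, 2015)
# Task B: every 6 weeks, starting on week 2 of 2011
b = occurrences("B", 2011, 2, 6, 2011)
# a starts (A, 2015, 1), (A, 2015, 3), (A, 2015, 5), ...
# b starts (B, 2011, 2), (B, 2011, 8), ...
```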
You probably think, "Hey, that is simple, just put it in a loop and you're good."
Not so fast!
The trick is that I need this to be within one SQL query.
I know I probably should be doing this in a stored procedure or a function, but I can't for now. I could also do it in VBA code, since the result will go into an Excel spreadsheet. But Excel has become an unstable product lately and I do not want to risk my code failing after an update from Microsoft. So I try as much as possible to stay within the limits of IBM i5/OS SQL queries.
I know the answer could be that it is impossible. But I believe in this community.
Thanks in advance,
EDIT :
I have found this post where it shows how to list dates within a range.
IBM DB2: Generate list of dates between two dates
I tried to generate a list of dates based on periodicity and it worked.
I am still struggling on the generation of multiple sequences based on multiple periodicity.
Here's the code I have so far:
SELECT d.min + num.n DAYS AS dates
FROM
    (VALUES (DATE('2017-01-01'), DATE('2017-03-01'))) AS d(min, max)
JOIN
    (
        -- Creates a table of numbers based on periodicity
        SELECT n1.n + n10.n + n100.n AS n
        FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS n1(n)
        CROSS JOIN (VALUES (0),(10),(20),(30),(40),(50),(60),(70),(80),(90)) AS n10(n)
        CROSS JOIN (VALUES (0),(100),(200),(300),(400),(500),(600),(700),(800),(900)) AS n100(n)
        -- Replace the 2nd argument of MOD by the desired frequency
        WHERE MOD(n1.n + n10.n + n100.n, 6) = 0
    ) num
ON d.min + num.n DAYS <= d.max
ORDER BY num.n
In other words, I need the dates in table d to be dynamic, as well as the periodicity (the 6 in num's WHERE clause).
Should I be using a WITH statement? If so, can someone please guide me, because I am not very used to this kind of statement.
EDIT#2:
Here is the table structure I'm working with:
TABLE NAME: SGTRCDP (Programmed Tasks):
| | Start | Start | Freq.
Asset | Task | Year | Week | (week)
--------------|------------|----------|----------|----------
TMPC531 | VER0560 | 2011 | 10 | 26
BAT0404 | IPNET030 | 2011 | 2 | 4
B-EXTINCT-151 | 001H-0011 | 2014 | 15 | 17
[...] | [...] | [...] | [...] | [...]
...and 4000 more like these; the unique key is the combination of the `Asset` and `Task` fields.
What I would like to have is this:
Asset | Task | Year | Week
--------------|------------|----------|----------
TMPC531 | VER0560 | 2011 | 10
TMPC531 | VER0560 | 2011 | 36
TMPC531 | VER0560 | 2012 | 10
TMPC531 | VER0560 | 2012 | 36
TMPC531 | VER0560 | 2013 | 10
TMPC531 | VER0560 | 2013 | 36
TMPC531 | VER0560 | 2014 | 10
TMPC531 | VER0560 | 2014 | 36
TMPC531 | VER0560 | 2015 | 10
TMPC531 | VER0560 | 2015 | 36
TMPC531 | VER0560 | 2016 | 10
TMPC531 | VER0560 | 2016 | 36
TMPC531 | VER0560 | 2017 | 10
TMPC531 | VER0560 | 2017 | 36
BAT0404 | IPNET030 | 2011 | 2
BAT0404 | IPNET030 | 2011 | 6
BAT0404 | IPNET030 | 2011 | 10
BAT0404 | IPNET030 | 2011 | 14
BAT0404 | IPNET030 | 2011 | 18
BAT0404 | IPNET030 | 2011 | 22
BAT0404 | IPNET030 | 2011 | 26
BAT0404 | IPNET030 | 2011 | 30
BAT0404 | IPNET030 | 2011 | 34
BAT0404 | IPNET030 | 2011 | 38
[...] | [...] | [...] | [...]
BAT0404 | IPNET030 | 2017 | 34
BAT0404 | IPNET030 | 2017 | 38
B-EXTINCT-151 | 001H-0011 | 2014 | 15
B-EXTINCT-151 | 001H-0011 | 2014 | 32
B-EXTINCT-151 | 001H-0011 | 2014 | 49
B-EXTINCT-151 | 001H-0011 | 2015 | 14
B-EXTINCT-151 | 001H-0011 | 2015 | 31
[...] | [...] | [...] | [...]
B-EXTINCT-151 | 001H-0011 | 2017 | 8
B-EXTINCT-151 | 001H-0011 | 2017 | 24
I was able to make it using a CTE, but it generates so many records that whenever I want to filter or order the data, it takes forever. The same goes for downloading the whole result set.
And I wouldn't risk creating a temporary table and busting the disk space.
Another caveat of the CTE is that it cannot be referenced as a subquery.
And guess what: my plan was to use it as a subquery in the FROM clause of a SELECT, joining it with the actual work orders table and doing Asset-Task-Year-Week matching to see whether the programmed tasks were executed as planned.
Anyway, here is the CTE I used to get it:
WITH PPM (EQ, TASK, FREQ, OCCYR, OCCWK, OCCDAT, NXTDAT) AS
(
SELECT
TRCD.DLACCD EQ,
TRCD.DLJ1CD TASK,
INT(SUBSTR(TRCD.DLL1TX,9,3)) FREQ,
AOAGNB OCCYR,
AOAQNB OCCWK,
CASE
WHEN aoaddt/1000000 >= 1 THEN
DATE('20'||substr(aoaddt,2,2)||'-'||substr(aoaddt,4,2)||'-'||substr(aoaddt,6,2))
ELSE
DATE('19'||substr(aoaddt,1,2)||'-'||substr(aoaddt,3,2)||'-'||substr(aoaddt,5,2))
END OCCDAT,
(CASE
WHEN aoaddt/1000000 >= 1 THEN
DATE('20'||substr(aoaddt,2,2)||'-'||substr(aoaddt,4,2)||'-'||substr(aoaddt,6,2))
ELSE
DATE('19'||substr(aoaddt,1,2)||'-'||substr(aoaddt,3,2)||'-'||substr(aoaddt,5,2))
END + (INT(SUBSTR(TRCD.DLL1TX,9,3)) * 7) DAYS) NXTDAT
FROM
(SELECT * FROM SGTRCDP WHERE DLIMST<>'H' AND TRIM(DLK5Cd)='S') TRCD
JOIN
(
SELECT
AOAGNB,
AOAQNB,
min(AOADDT) aoaddt
FROM SGCALDP
GROUP BY AOAGNB, AOAQNB
) CLND
ON AOAGNB=SUBSTR(TRCD.DLL1TX,1,4) AND AOAQNB=INT(SUBSTR(TRCD.DLL1TX,12,2))
WHERE DLACCD='CON0539' AND DLJ1CD='CON0539-04'
UNION ALL
SELECT
PPMNXT.EQ,
PPMNXT.TASK,
PPMNXT.FREQ,
AOAGNB OCCYR,
AOAQNB OCCWK,
CASE
WHEN aoaddt/1000000 >= 1 THEN
DATE('20'||substr(aoaddt,2,2)||'-'||substr(aoaddt,4,2)||'-'||substr(aoaddt,6,2))
ELSE
DATE('19'||substr(aoaddt,1,2)||'-'||substr(aoaddt,3,2)||'-'||substr(aoaddt,5,2))
END OCCDAT,
(CASE
WHEN aoaddt/1000000 >= 1 THEN
DATE('20'||substr(aoaddt,2,2)||'-'||substr(aoaddt,4,2)||'-'||substr(aoaddt,6,2))
ELSE
DATE('19'||substr(aoaddt,1,2)||'-'||substr(aoaddt,3,2)||'-'||substr(aoaddt,5,2))
END + (PPMNXT.FREQ * 7) DAYS) NXTDAT
FROM
    PPM PPMNXT
JOIN
(
SELECT
AOAGNB,
AOAQNB,
min(AOADDT) aoaddt
FROM SGCALDP
GROUP BY AOAGNB, AOAQNB
) CLND
ON AOAGNB=YEAR(PPMNXT.NXTDAT) AND AOAQNB=WEEK_ISO(PPMNXT.NXTDAT)
WHERE
YEAR(CASE
WHEN aoaddt/1000000 >= 1 THEN
DATE('20'||substr(aoaddt,2,2)||'-'||substr(aoaddt,4,2)||'-'||substr(aoaddt,6,2))
ELSE
DATE('19'||substr(aoaddt,1,2)||'-'||substr(aoaddt,3,2)||'-'||substr(aoaddt,5,2))
END + (PPMNXT.FREQ * 7) DAYS) <= YEAR(CURRENT_DATE)
)
SELECT EQ, TASK, OCCYR, OCCWK, OCCDAT FROM PPM
That was the best I could do.
You will notice that I restricted it to one specific Asset and Task:
WHERE DLACCD='CON0539' AND DLJ1CD='CON0539-04'
Normally I would not filter the data, in order to retrieve all the scheduled weeks for every task. I had to filter on one root key to keep the query from eating up resources and making our AS/400 crash.
Again, I am not an expert in CTEs, there might be a better solution.
Thanks
I have a query that returns the credit notes (CN) and debit notes (DN) of an operation; each CN is accompanied by two or more DNs (referenced by the field payment_plan_id). When paging, I need to fetch, say, 10 operations, that is, 10 CNs and their DNs. But if I set the limit to 10, the limit also counts the debit notes of the transactions the query must return, so it brings back only 2, 3 or 4 operations, depending on the number of DNs that accompany each credit note.
SELECT
value, installment, payment_plan_id, model,
creation_date, operation
FROM payment_plant
WHERE model != 'IMMEDIATE'
AND operation IN ('CN', 'DN')
AND creation_date BETWEEN '2017-06-12' AND '2017-07-12 23:59:59'
ORDER BY
model,
creation_date,
operation
LIMIT 10
OFFSET 1
Example of the table obviating some fields:
| id | payment_plan_id | value | installment | operation |
---------------------------------------------------------
| 1 | b3cdaede | 12 | 1 | CN |
| 2 | b3cdaede | 3.5 | 1 | DN |
| 3 | b3cdaede | 1.2 | 1 | DN |
| 4 | e1d7f051 | 36 | 1 | CN |
| 5 | e1d7f051 | 5.9 | 1 | DN |
| 6 | 00e6a0b4 | 15 | 1 | CN |
| 7 | 00e6a0b4 | 1 | 1 | DN |
| 8 | 00e6a0b4 | 3.6 | 1 | DN |
How can I constrain the LIMIT so that it only counts the CNs?
Well, the query you give above doesn't do remotely what you describe, so I'll assume you actually want "the last 10 CNs and their DNs". You also don't explain which fields a CN and its DNs have in common, so I'm going to assume they are payment_plan_id and installment. Given that, here's how you would get it:
WITH last_10_cn AS (
SELECT
value, installment, payment_plan_id, model,
creation_date
FROM payment_plant
WHERE model != 'IMMEDIATE'
AND operation = 'CN'
AND creation_date BETWEEN '2017-06-12' AND '2017-07-12 23:59:59'
ORDER BY
model,
creation_date,
operation
LIMIT 10
OFFSET 1 )
SELECT last_10_cn.*,
dn.value as dn_value, dn.model as dn_model,
dn.creation_date as dn_creation_date
FROM last_10_cn JOIN payment_plant as dn
ON last_10_cn.payment_plan_id = dn.payment_plan_id
AND last_10_cn.installment = dn.installment
ORDER BY
    last_10_cn.model,
    last_10_cn.creation_date,
    last_10_cn.operation,
    dn.creation_date;
Adjust the above according to the actual join conditions and how you really want things to be sorted.
BTW, your table structure is what's giving you trouble here. DNs should really be a separate table with a foreign key to CNs. I realize that's not how most GLs do it, but the GL model predates relational databases.
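The shape of the fix (page the parent rows first, then join the children back in) is portable across databases. A small sketch using Python's sqlite3 with the sample data from the question, operation codes normalized to CN/DN and the limit shrunk to 2 for the tiny data set:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE payment_plant (
  id INT, payment_plan_id TEXT, value REAL, operation TEXT);
INSERT INTO payment_plant VALUES
  (1, 'b3cdaede', 12,  'CN'), (2, 'b3cdaede', 3.5, 'DN'),
  (3, 'b3cdaede', 1.2, 'DN'), (4, 'e1d7f051', 36,  'CN'),
  (5, 'e1d7f051', 5.9, 'DN'), (6, '00e6a0b4', 15,  'CN'),
  (7, '00e6a0b4', 1,   'DN'), (8, '00e6a0b4', 3.6, 'DN');
""")

# The LIMIT applies to the CNs only; their DNs come along via the join.
rows = con.execute("""
WITH first_cn AS (
  SELECT payment_plan_id
  FROM payment_plant
  WHERE operation = 'CN'
  ORDER BY id
  LIMIT 2
)
SELECT p.id, p.payment_plan_id, p.operation
FROM first_cn
JOIN payment_plant p USING (payment_plan_id)
ORDER BY p.id
""").fetchall()
# 2 CNs selected, 5 rows returned in total (the CNs plus their DNs)
```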