How to optimize a script and handle dynamic data in Oracle SQL?

I need to create a 12-month report that counts values per month. I have made a separate temp table using WITH for each month, which counts parts for each aircraft. It takes data from the PARTS table. My table for January looks like this:
| type   | qty | month |
| Airbus | 248 | 1     |
| Boeing | 120 | 1     |
| Emb    | 14  | 1     |
Then I count the number of aircraft of each type per month using the AC table; here's the table for January:
| type   | qty | month |
| Airbus | 23  | 1     |
| Boeing | 10  | 1     |
| Emb    | 5   | 1     |
Since I need the ratio of qty to count, I compute the division qty / count. So I joined table 1 and table 2 on the month column, and the combined table for January looks like this:
| type   | qty | count | div  | month |
| Airbus | 248 | 23    | 10.7 | 1     |
| Boeing | 120 | 10    | 12   | 1     |
| Emb    | 14  | 5     | 2.8  | 1     |
I create a temp table for each month and then combine them all with UNION ALL, but I am afraid this could slow the database down, so I think I need to rewrite and optimize my script. Any ideas how I could do that?
Also, the data in the tables is dynamic and can change, so I need to look only at the last 12 months.
With my current script I would have to keep manually adding more months, which is not workable.
Is there a way to solve the optimization problem and take only the last 12 months into account?

Rather than having a sub-query factoring (WITH) clause for each month and trying to use UNION ALL to join them, you can include the month in the GROUP BY clause when you are counting the quantities in the ac and parts tables.
Since you only provided the intermediate output from the WITH clauses, I've had to try to reverse engineer your original tables to give a query something like this:
SELECT COALESCE(ac.type, p.type) AS type,
       COALESCE(ac.month, p.month) AS month,
       COALESCE(p.qty, 0) AS qty,     -- parts counted per type per month
       COALESCE(ac.qty, 0) AS count,  -- aircraft counted per type per month
       CASE ac.qty WHEN 0 THEN NULL ELSE p.qty / ac.qty END AS div
FROM (
    SELECT type,
           TRUNC(date_column, 'MM') AS month,
           COUNT(*) AS qty
    FROM ac
    WHERE date_column >= ADD_MONTHS(SYSDATE, -12)
    GROUP BY type,
             TRUNC(date_column, 'MM')
) ac
FULL OUTER JOIN (
    SELECT type,
           TRUNC(date_column, 'MM') AS month,
           COUNT(*) AS qty
    FROM parts
    WHERE date_column >= ADD_MONTHS(SYSDATE, -12)
    GROUP BY type,
             TRUNC(date_column, 'MM')
) p
ON (    ac.type = p.type
    AND ac.month = p.month )
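As a side note, ADD_MONTHS(SYSDATE, -12) measures back from the current moment, so the oldest month in the result will usually be partial. If you want the last 12 whole calendar months instead, a sketch of the tweaked filter (applied in both subqueries) would be:

WHERE date_column >= TRUNC(ADD_MONTHS(SYSDATE, -12), 'MM') -- first day of the month 12 months ago
  AND date_column <  TRUNC(SYSDATE, 'MM')                  -- exclude the current partial month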

Related

SQL - Get historic count of rows collected within a certain period by date

For many years I've been collecting data and I'm interested in knowing the historic counts of IDs that appeared in the last 30 days. The source looks like this
| id  | dates      |
| 1   | 2002-01-01 |
| 2   | 2002-01-01 |
| 3   | 2002-01-01 |
| ... | ...        |
| 3   | 2023-01-10 |
If I wanted to know the historic count of IDs that appeared in the last 30 days, I would do something like this:
with total_counter as (
    select id, count(id) counts
    from source
    group by id
),
unique_obs as (
    select id
    from source
    where dates >= DATEADD(Day, -30, current_date)
    group by id
)
select count(distinct unique_obs.id)
from unique_obs
left join total_counter
    on total_counter.id = unique_obs.id;
The problem is that this returns a single result: today's count, as determined by current_date.
I would like to see a table of such counts as if, for example, I had run this analysis yesterday, and the day before, and so on. So the expected result would be something like
| counts | date       |
| 1235   | 2023-01-10 |
| 1234   | 2023-01-09 |
| 1265   | 2023-01-08 |
| ...    | ...        |
| 7383   | 2022-12-11 |
So, for example, if the current_date had been 2023-01-10, my query would have returned 1235.
If you need a distinct count of IDs from the 30 days up to and including each date, the below should work:
WITH CTE_DATES AS
(
    --Create a list of anchor dates
    SELECT DISTINCT dates
    FROM source
)
SELECT COUNT(DISTINCT S.id) AS "counts"
      ,D.dates              AS "date"
FROM CTE_DATES D
LEFT JOIN source S ON S.dates BETWEEN DATEADD(DAY, -29, D.dates) AND D.dates --30 days inclusive
GROUP BY D.dates
ORDER BY D.dates DESC;
If the distinct count didn't matter, you could likely simplify with a rolling sum, only hitting the source table once:
SELECT S.dates AS "date"
      ,COUNT(1) AS "count_daily"
      ,SUM(COUNT(1)) OVER (ORDER BY S.dates DESC ROWS BETWEEN CURRENT ROW AND 29 FOLLOWING) AS "count_rolling" --assumes there is at least one row for every day
FROM source S
GROUP BY S.dates
ORDER BY S.dates DESC;
This won't work, though, if you have gaps in your list of dates, as it will just include the latest 30 days available. In that case the first example, without DISTINCT in the count, will do the trick.
SELECT count(*) AS Counts
      ,dates    AS Date
FROM source
WHERE dates >= DATEADD(DAY, -30, CURRENT_DATE)
GROUP BY dates
ORDER BY dates DESC
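For reference, here is a sketch of that first example with the DISTINCT dropped, which keeps the anchor-date join and therefore still copes with gaps:

WITH CTE_DATES AS
(
    -- one anchor row per date that actually appears in the data
    SELECT DISTINCT dates
    FROM source
)
SELECT COUNT(S.id) AS "counts" -- plain count instead of COUNT(DISTINCT ...)
      ,D.dates     AS "date"
FROM CTE_DATES D
LEFT JOIN source S ON S.dates BETWEEN DATEADD(DAY, -29, D.dates) AND D.dates
GROUP BY D.dates
ORDER BY D.dates DESC;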

Retrieve Customers with a Monthly Order Frequency greater than 4

I am trying to optimize the below query to fetch all customers in the last three months who have a monthly order frequency of 4 or more in each of those months.
| Customer ID | Feb | Mar | Apr |
| 0001        | 4   | 5   | 6   |
| 0002        | 3   | 2   | 4   |
| 0003        | 4   | 2   | 3   |
In the above table, only the customer with Customer ID 0001 should be picked, as they consistently have 4 or more orders per month.
Below is a query I have written, which pulls all customers with an average purchase frequency of 4 in the last 90 days, but it does not check that there were consistently 4 or more purchases in each of the last three months.
Query:
SELECT distinct lines.customer_id Customer_ID, (COUNT(lines.order_id)/90) PurchaseFrequency
FROM fct_customer_order_lines lines
LEFT JOIN product_table product
    ON lines.entity_id = product.entity_id
    AND lines.vendor_id = product.vendor_id
WHERE LOWER(product.country_code) = "IN"
    AND lines.date >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
    AND lines.date < CURRENT_DATE()
GROUP BY Customer_ID
HAVING PurchaseFrequency >= 4;
I tried to use window functions; however, I am not sure whether they are needed in this case.
I would sum the orders per month instead of computing the average, and then retrieve those who have a sum of 4 or more in each of the last three months.
Also, I think you should select your interval using month(CURRENT_DATE()) - 3 instead of a window of 90 days. Of course, if needed, you should handle the case where current_date falls in Jan-Feb-Mar and, in that case, go back to Oct-Nov-Dec of the previous year.
I'm not familiar with Google BigQuery, so I can't write your query, but I hope this helps.
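For what it's worth, here is a minimal, untested sketch of that idea in BigQuery-style SQL, reusing the table and column names from the question and a three-calendar-month window:

WITH monthly AS (
    SELECT lines.customer_id AS customer_id,
           DATE_TRUNC(lines.date, MONTH) AS order_month,
           COUNT(lines.order_id) AS orders
    FROM fct_customer_order_lines lines
    WHERE lines.date >= DATE_SUB(DATE_TRUNC(CURRENT_DATE(), MONTH), INTERVAL 3 MONTH)
      AND lines.date < DATE_TRUNC(CURRENT_DATE(), MONTH)
    GROUP BY customer_id, order_month
)
SELECT customer_id
FROM monthly
GROUP BY customer_id
-- the customer must appear in all 3 months, with 4+ orders even in the worst month
HAVING COUNT(*) = 3 AND MIN(orders) >= 4;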
So I've found the solution to this using the WITH clause, as below:
WITH filtered_orders AS (
    select distinct
        customer_id ID,
        extract(MONTH from date) Order_Month,
        count(order_id) CountofOrders
    from customer_order_lines lines
    where EXTRACT(YEAR FROM date) = 2022 AND EXTRACT(MONTH FROM date) IN (2, 3, 4)
    group by ID, Order_Month
    having CountofOrders >= 4
)
select distinct ID
from filtered_orders
group by ID
having count(Order_Month) = 3;
Hope this helps!
An option could be to first count the orders by month and then keep only the users whose purchases are above your threshold in every month:
WITH ORDERS_BY_MONTH AS (
    SELECT
        DATE_TRUNC(lines.date, MONTH) PurchaseMonth,
        lines.customer_id Customer_ID,
        COUNT(lines.order_id) PurchaseFrequency
    FROM fct_customer_order_lines lines
    LEFT JOIN product_table product
        ON lines.entity_id = product.entity_id
        AND lines.vendor_id = product.vendor_id
    WHERE LOWER(product.country_code) = "IN"
        AND lines.date >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
        AND lines.date < CURRENT_DATE()
    GROUP BY PurchaseMonth, Customer_ID
)
SELECT
    Customer_ID,
    AVG(PurchaseFrequency) AvgPurchaseFrequency
FROM ORDERS_BY_MONTH
GROUP BY Customer_ID
HAVING COUNT(1) = COUNTIF(PurchaseFrequency >= 4)
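The HAVING clause works because COUNT(1) is the number of months the customer appears in, while COUNTIF(PurchaseFrequency >= 4) is the number of those months that meet the threshold; requiring the two to be equal keeps only customers with 4 or more orders in every month of the window.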

Postgresql Sum statistics by month for a single year

I have the following table in the database (item_id integer, item_name character varying, price double precision, user_id integer, category_id integer, date date):

| item_id | item_name    | price | user_id | category_id | date         |
| 1       | Pizza        | 2.99  | 1       | 2           | '2020-01-01' |
| 2       | Cinema       | 5     | 1       | 3           | '2020-01-01' |
| 3       | Cheeseburger | 4.99  | 1       | 2           | '2020-01-01' |
| 4       | Rental       | 100   | 1       | 1           | '2020-01-01' |
Now I want to get statistics for the total price for each month of a year. I need this both for all items and for a single category, and both for all time and for a specified time period. For example, using this
SELECT EXTRACT(MONTH from date), COALESCE(SUM(price), 0)
FROM item_table
WHERE user_id = 1 AND category_id = 3 AND date BETWEEN '2020-01-01' AND '2021-01-01'
GROUP BY date_part
ORDER BY date_part;
I expect to obtain this:
| date_part | total |
| 1         | 5     |
| 2         | 0     |
| 3         | 0     |
| ...       | ...   |
| 12        | 0     |
However, I get this:
| date_part | total |
| 1         | 5     |
1) How can I get a zero value for months when no items in the specified category are found? (Right now it just skips the month.)
2) The above example gives the statistics for the selected category within some time period. To cover all my purposes I would need to write 3 more queries (all time and all categories / all time, single category / single year, all categories). Is there a single query for all these cases (when some parameters like category_id or date are null)?
You can get the "empty" months by doing a right join against a table that contains the month numbers and moving the WHERE criteria into the JOIN criteria:
-- Create a temporary "table" of month numbers
WITH months AS (SELECT * FROM generate_series(1, 12) AS t(n))
SELECT months.n AS date_part, COALESCE(SUM(price), 0) AS total
FROM item_table
RIGHT JOIN months ON EXTRACT(MONTH from date) = months.n
    AND user_id = 1 AND category_id = 3
    AND "date" BETWEEN '2020-01-01' AND '2021-01-01'
GROUP BY months.n
ORDER BY months.n;
I'm not quite sure what you want from your second part, but you could take a look at Grouping Sets, Cube and Rollup.
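For example, here is a rough sketch of how GROUPING SETS could cover several of those cases in one query, reusing the item_table from the question; rows where category_id comes back NULL are the all-category subtotals:

SELECT category_id,
       EXTRACT(MONTH FROM date) AS date_part,
       COALESCE(SUM(price), 0) AS total
FROM item_table
WHERE user_id = 1
  AND date >= DATE '2020-01-01' AND date < DATE '2021-01-01'
GROUP BY GROUPING SETS (
    (category_id, EXTRACT(MONTH FROM date)), -- per category, per month
    (EXTRACT(MONTH FROM date)),              -- all categories, per month
    ()                                       -- grand total
)
ORDER BY category_id, date_part;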

Get consecutive months and days difference from date range?

So let's say I have a table like this:
| subscriber_id | package_id | package_start_date | package_end_date | package_price_per_day |
| 1081          | 231        | 2014-01-13         | 2014-12-31       | $3                    |
| 1084          | 231        | 2014-03-21         | 2014-06-05       | $3                    |
| 1086          | 235        | 2014-06-21         | 2014-09-09       | $4                    |
Now I want the top 3 packages based on total revenue for each month of 2014.
Note: for example, for package 231 the revenue should be calculated as 18 days of Jan * $3 + 28 days of Feb * $3 + ... and so on.
For the second row the calculation would be the same as for the first row (9 days of March * $3 + 30 days of April * $3 ...).
In the result, packages should be grouped by month and ranked by total revenue.
Sample result:
| Month | Package_id | Revenue | Rank |
| Jan   | 231        | 69499   | 1    |
| Jan   | 235        | 34345   | 2    |
| Jan   | 238        | 23455   | 3    |
| Feb   | 231        | 89274   | 1    |
I wrote a query to filter the dates so that I only get subscribers active during 2014 (since initially there were values from different years), which produces the first table in the question, but I am not sure how to break things up by months and days afterwards.
select subscriber_id, package_id, package_start_date, package_end_date
from (
    select subscriber_id, package_id
        -- clamp dates from other years into 2014
        , case when year(package_start_date) < 2014 then '01-Jan-2014' else package_start_date end as package_start_date
        , case when year(package_end_date) > 2014 then '31-Dec-2014' else package_end_date end as package_end_date
        , price_per_day
    from subscription
) a
where year(package_start_date) = 2014 and year(package_end_date) = 2014
Please do not focus on the syntax; I am just trying to understand the logical approach in SQL.
Suppose you have a table that is a list of unique dates in a column called d, and the table is called d
It is then relatively trivial to do
SELECT *
FROM t
INNER JOIN d on d.d >= t.package_start_date AND d.d < t.package_end_date
This assumes you class a start date of Jan 1 and an end date of Jan 2 as 1 day; if you class it as two, use <= instead.
This will cause your package rows to multiply into one row per active day, so a start and end of Jan 1 and Jan 11 would mean that row repeats 10 times. The d.d date is different on every row, so you can extract the month from d.d and then group on it to give totals for each month per package.
Suppose you've CTEd that query above as x; it's like
SELECT DATEPART(month, x.dd) AS month, --the d.d date
       package_id,
       SUM(revenue) AS revenue -- each day-row's revenue is the package's price per day
FROM x
GROUP BY DATEPART(month, x.dd), package_id
Because the rows from t are repeated by Cartesian explosion when joined to d, you can safely group or aggregate them to get back to single values per month per package. If you have packages that stay with you for more than a year, you should also group on the year part, to avoid mixing up the months from packages that stay from e.g. Jan 2020 to Feb 2021 (they stay for two Jans and two Febs).
Then all you need to do is add in the ranking of the revenue, which looks like it would go in at the first step with something like
RANK() OVER (ORDER BY DATEDIFF(DAY, start, end) * revenue DESC)
I think I've understood correctly that you rank packages on total revenue over the entire period rather than per month. Look up the difference between RANK and DENSE_RANK too, as you may want DENSE_RANK instead.
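Putting those pieces together, a rough, untested sketch of the per-month top 3 (using the same t and d tables as above, and treating package_price_per_day as numeric):

WITH x AS (
    -- one row per package per active day; each row's revenue is the price per day
    SELECT DATEPART(month, d.d) AS mth,
           t.package_id,
           t.package_price_per_day AS revenue
    FROM t
    INNER JOIN d ON d.d >= t.package_start_date AND d.d < t.package_end_date
), m AS (
    SELECT mth, package_id, SUM(revenue) AS revenue,
           RANK() OVER (PARTITION BY mth ORDER BY SUM(revenue) DESC) AS rnk
    FROM x
    GROUP BY mth, package_id
)
SELECT mth, package_id, revenue, rnk
FROM m
WHERE rnk <= 3
ORDER BY mth, rnk;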

Expected payments by day given start and end date

I'm trying to create a SQL view that gives me the expected amount to be received by calendar day for recurring transactions. I have a table containing recurring commitments data, with the following columns:
id,
start_date,
end_date (null if still active),
payment day (1,2,3,etc.),
frequency (monthly, quarterly, semi-annually, annually),
commitment amount
For now, I do not need to worry about business days vs calendar days.
In its simplest form, the end result would contain every historical calendar day as well as future dates for the next year, and show how much was/is expected to be received on those particular days.
I've done quite a bit of research, but cannot seem to find an answer that addresses this specific problem. Any direction on where to start would be greatly appreciated.
The expected output would look something like this:
| Date | Expected Amount |
|1/1/18 | 100 |
|1/2/18 | 200 |
|1/3/18 | 150 |
Thank you ahead of time!
Link to data table in db-fiddle
Expected Output Spreadsheet
It's something like this, but I've never used Netezza
SELECT
    cal.d, SUM(r.amount) AS expected_amount
FROM
(
    -- cartesian self-joins generate plenty of rows; numbering them and adding
    -- the number to the earliest start date yields an incrementing date series
    SELECT MIN(a.start_date) OVER () + ROW_NUMBER() OVER (ORDER BY NULL) AS d
    FROM recurring a, recurring b, recurring c
) cal
LEFT JOIN
recurring r
ON
(
    (r.frequency = 'monthly' AND r.payment_day = DATE_PART('DAY', cal.d)) OR
    (r.frequency = 'annually' AND DATE_PART('MONTH', cal.d) = DATE_PART('MONTH', r.start_date) AND r.payment_day = DATE_PART('DAY', cal.d))
) AND
r.start_date <= cal.d AND                    -- the commitment has started by this day
(r.end_date >= cal.d OR r.end_date IS NULL)  -- and has not yet ended
GROUP BY cal.d
In essence, we cartesian join the recurring table to itself a few times to generate a load of rows, number them, and add the number onto the minimum date to get an incrementing date series.
The payments data table is left joined onto this incrementing date series on:
(the day of the date from the series) = (the payment day), for monthlies
(the month and day of the date from the series) = (the month of the start_date and the payment day), for annuals
Finally, the whole lot is grouped and summed.
I don't have a test instance of Netezza so if you encounter some minor syntax errors, do please have a stab at fixing them up yourself (to make it faster for you to get a solution). If you reach a point where you can't work out what the query is doing, let me know
Disclaimer: I'm no expert on Netezza, so I decided to write you a standard SQL that may need some tweaking to run on Netezza.
with
digit as ( -- digits 0-9
    select 0 as x union select 1 union select 2 union select 3 union select 4
    union select 5 union select 6 union select 7 union select 8 union select 9
),
number as ( -- produces numbers from 0 to 9999 (28 years)
    select d1.x + d2.x * 10 + d3.x * 100 + d4.x * 1000 as n
    from digit d1
    cross join digit d2
    cross join digit d3
    cross join digit d4
),
expected_payment as ( -- expands all expected payments
    select
        c.start_date + nb.n as day,
        c.committed_amount
    from recurring_commitment c
    cross join number nb
    where (c.end_date is null or c.start_date + nb.n <= c.end_date) -- end_date is null while still active
        and c.frequency ... -- add logic for monthly, quarterly, etc. here
)
select
    day,
    sum(committed_amount) as expected_amount
from expected_payment
group by day
order by day
This solution is valid for commitments that do not exceed 28 years, since the number CTE (Common Table Expression) produces at most 9999 days. Expand with a fifth digit if you need longer commitments.
Note: I think the way I'm adding days to a date may not be correct in Netezza's SQL. The expression c.start_date + nb.n may need to be rephrased.