Oracle SQL Month Statement Generation

I am having a performance issue with a set of SQLs used to generate the current month's statement in real time.
Customers purchase goods using points in an online system, and a statement containing "open_balance", "point_earned", "point_used", "current_balance" should be generated.
The following shows the shortened schema:
// ~200k records
customer: {account_id:string, create_date:timestamp, bill_day:int} // 14 fields in total
// ~250k records per month, kept for 6 months
history_point: {point_id:long, account_id:string, point_date:timestamp, point:int} // 9 fields in total
// each customer has a maximum of 12 past statements kept
history_statement: {account_id:string, open_date:date, close_date:date, open_balance:int, point_earned:int, point_used:int, close_balance:int} // 9 fields in total
On every bill day, the view should automatically create a new month statement.
i.e. if bill_day is 15, then a transaction done on or after 16 Dec 2013 00:00:00 should belong to the new bill cycle of 16 Dec 2013 00:00:00 - 15 Jan 2014 23:59:59.
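For concreteness, the current cycle-open date can be derived from bill_day along these lines (a sketch with assumed names; bill_day values that exceed the length of a short month would need extra handling):
SELECT c.account_id,
       CASE WHEN EXTRACT(DAY FROM SYSDATE) > c.bill_day
            THEN TRUNC(SYSDATE, 'MM') + c.bill_day                  -- cycle opened this month
            ELSE ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -1) + c.bill_day  -- cycle opened last month
       END AS cycle_open_date
FROM customer c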
I tried the approach described below:
1. Calculate the last close day for each account (in a materialized view, so that it updates only after a new customer or past month statement is inserted into history_statement)
2. Generate a record for each customer for each month that I need to calculate (also in a materialized view)
3. Sieve the point records for only those within the dates that I will calculate (this takes ~0.1s only)
4. Join 2 with 3 to obtain the points earned and used for each customer for each month
5. Join 4 with itself on date less than open date to sum the open and close balances
6a. Select from 5 where the open date is less than 1 month old as the current balance (these are not closed yet, and the points reflect what each customer owns now)
6b. All the statements are obtained by a union of history_statement and 5
On a development server, the average response time (200k customers, 1.5M transactions in the current month) is ~3s, which is pretty slow for a web application; on the testing server, where resources are likely to be shared, the average response time (200k customers, ~200k transactions per month for 8 months) is 10-15s.
Does anyone have an idea for a better approach, or a way to speed up the query?
Related SQL:
2: IV_STCLOSE_2_1_T (materialized view)
3: IV_STCLOSE_2_2_T (~0.15s)
SELECT ACCOUNT_ID, POINT_DATE, POINT
FROM history_point
WHERE point_date >= (
    SELECT MIN(open_date)
    FROM IV_STCLOSE_2_1_t
)
4: IV_STCLOSE_3_T (~1.5s)
SELECT p0.account_id, p0.open_date, p0.close_date,
       COALESCE(SUM(DECODE(SIGN(p.point), -1, p.point)), 0) AS point_used,
       COALESCE(SUM(DECODE(SIGN(p.point), 1, p.point)), 0) AS point_earned
FROM iv_stclose_2_1_t p0
LEFT JOIN iv_stclose_2_2_t p
    ON p.account_id = p0.account_id
   AND p.point_date >= p0.open_date
   AND p.point_date < p0.close_date + INTERVAL '1' DAY
GROUP BY p0.account_id, p0.open_date, p0.close_date
5: IV_STCLOSE_4_T (~3s)
WITH t AS (SELECT * FROM IV_STCLOSE_3_T)
SELECT t1.account_id AS stat_account_id, t1.open_date, t1.close_date, t1.open_balance,
       t1.point_earned AS point_earn, t1.point_used,
       t1.open_balance + t1.point_earned + t1.point_used AS close_balance
FROM (
    SELECT v1.account_id, v1.open_date, v1.close_date, v1.point_earned, v1.point_used,
           COALESCE(SUM(v2.point_used + v2.point_earned), 0) AS open_balance
    FROM t v1
    LEFT JOIN t v2
        ON v1.account_id = v2.account_id
       AND v1.open_date > v2.open_date
    GROUP BY v1.account_id, v1.open_date, v1.close_date, v1.point_earned, v1.point_used
) t1

It turns out that in IV_STCLOSE_4_T the line
WITH t AS (SELECT * FROM IV_STCLOSE_3_T)
is problematic.
At first I thought WITH t AS would be faster, since IV_STCLOSE_3_T is only evaluated once, but it apparently forced materialization of the whole IV_STCLOSE_3_T, generating over 200k records even though I need at most 12 of them for a single customer at any time.
With that clause removed and account_id appropriately indexed, the cost dropped from over 500k to less than 500.
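For illustration, the reshaped query might look roughly like this (index names are made up, and the indexes go on the materialized views feeding IV_STCLOSE_3_T; :acct is the bind variable for the requested customer):
CREATE INDEX ix_stclose_2_1_acct ON iv_stclose_2_1_t (account_id);
CREATE INDEX ix_stclose_2_2_acct ON iv_stclose_2_2_t (account_id);

SELECT t1.account_id AS stat_account_id, t1.open_date, t1.close_date, t1.open_balance,
       t1.point_earned AS point_earn, t1.point_used,
       t1.open_balance + t1.point_earned + t1.point_used AS close_balance
FROM (
    -- referencing the view directly (no WITH) lets the optimizer push the
    -- account_id predicate down to the indexed materialized views
    SELECT v1.account_id, v1.open_date, v1.close_date, v1.point_earned, v1.point_used,
           COALESCE(SUM(v2.point_used + v2.point_earned), 0) AS open_balance
    FROM iv_stclose_3_t v1
    LEFT JOIN iv_stclose_3_t v2
        ON v1.account_id = v2.account_id
       AND v1.open_date > v2.open_date
    WHERE v1.account_id = :acct
    GROUP BY v1.account_id, v1.open_date, v1.close_date, v1.point_earned, v1.point_used
) t1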


How to carry over latest observed record when grouping by on SQL?

(I've created a similar question before, but I messed it up beyond repair. Hopefully, I can express myself better this time.)
I have a table containing records that change through time, each row representing a modification in Stage and Amount. I need to group these records by Day and Stage, summing up the Amount.
The tricky part: ids might not change on some days. Since there won't be any record on those days, I need to carry over the latest observed record.
Find below the records table and the expected result. MRE on dbfiddle (PostgreSQL).
Records
Expected Result
I created this basic visualization to demonstrate how the Amounts and Stages change throughout the days. Each number/color change represents a modification.
The logic behind the expected result can be found below.
Total Amount by Stage on Day 2
Id A was modified on Day 2, let's take that Amount: Negotiation 60.
Id B wasn't modified on Day 2, so we carry over the most recent modification (Day 1): Open 10.
Open 10
Negotiation 60
Closed 0
Total Amount by Stage on Day 3
Id A wasn't modified on Day 3, so we carry over the most recent modification (Day 2): Negotiation 60.
Id B was modified on Day 3: Negotiation 30.
Open 0
Negotiation 90
Closed 0
Basically, you seem to want the most recent value for each id, and it only gets counted for the most recent stage.
You can get this using a formulation like this:
select d.DateDay, s.stage, coalesce(sh.amount, 0)
from (select distinct sh.DateDay from stage_history sh) d cross join
     (select distinct sh.stage from stage_history sh) s left join lateral
     (select sum(sh.amount) as amount
      from (select distinct on (sh.id) sh.*
            from stage_history sh
            where sh.DateDay <= d.DateDay
            order by sh.id, sh.DateDay desc
           ) sh
      where sh.stage = s.stage
     ) sh
     on 1=1
order by d.DateDay, s.stage;
Here is a db<>fiddle.
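For what it's worth, on engines without Postgres's DISTINCT ON, the "latest row per id as of a day" step could be written with a window function instead (a sketch; :cutoff_day stands in for the correlated d.DateDay above):
select *
from (select sh.*,
             row_number() over (partition by sh.id
                                order by sh.DateDay desc) as rn
      from stage_history sh
      where sh.DateDay <= :cutoff_day
     ) latest
where rn = 1;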

oracle sql: efficient way to calculate business days in a month

I have a pretty huge table with columns date, account, amount, etc. e.g.
date account amount
4/1/2014 XXXXX1 80
4/1/2014 XXXXX1 20
4/2/2014 XXXXX1 840
4/3/2014 XXXXX1 120
4/1/2014 XXXXX2 130
4/3/2014 XXXXX2 300
...........
(I have 40 months' worth of daily data and multiple accounts.)
The final output I want is the average amount for each account each month. Since there may or may not be a record for any account on a single day, and I have a separate table of holidays from 2011~2014, I am summing up the amount for each account within a month and dividing it by the number of business days in that month. Note that there are very likely to be record(s) on weekends/holidays, so I need to exclude them from the calculation. Also, I want to have a record for each of the dates available in the original table. e.g.
date account amount
4/1/2014 XXXXX1 48 ((80+20+840+120)/22)
4/2/2014 XXXXX1 48
4/3/2014 XXXXX1 48
4/1/2014 XXXXX2 19 ((130+300)/22)
4/3/2014 XXXXX2 19
...........
(Suppose the above is the only data I have for Apr-2014.)
I am able to do this in a hacky and slow way, but as I need to join this process with other subqueries, I really need to optimize this query. My current code looks like:
select
    date,
    account,
    sum(amount / days_mon) over (partition by last_day(date))
from (
    select
        date,
        -- there are more calculations to get the account numbers,
        -- so this subquery is necessary
        account,
        amount,
        -- this is a list of month-end dates for which the number of
        -- business days in that month is 19. similar below.
        case when last_day(date) in ('','',...,'') then 19
             when last_day(date) in ('','',...,'') then 20
             when last_day(date) in ('','',...,'') then 21
             when last_day(date) in ('','',...,'') then 22
             when last_day(date) in ('','',...,'') then 23
        end as days_mon
    from mytable tb
    inner join lookup_businessday_list busi
        on tb.date = busi.date)
So how can I accomplish this efficiently? Thank you!
This approach uses sub-query factoring - what other RDBMS flavours call common table expressions. The attraction here is that we can pass the output from one CTE as input to another.
The first CTE generates a list of dates in a given month (you can extend this over any range you like).
The second CTE uses an anti-join on the first to filter out dates which are holidays, and also dates which aren't weekdays. Note that the day number varies depending on the NLS_TERRITORY setting; in my realm the weekend is days 6 and 7, but SQL Fiddle is American, so there it is 1 and 7.
with dates as ( select date '2014-04-01' + ( level - 1) as d
from dual
connect by level <= 30 )
, bdays as ( select d
, count(d) over () tot_d
from dates
left join holidays
on dates.d = holidays.hol_date
where holidays.hol_date is null
and to_number(to_char(dates.d, 'D')) between 2 and 6
)
select yt.account
, yt.txn_date
, sum(yt.amount) over (partition by yt.account, trunc(yt.txn_date,'MM'))
/tot_d as avg_amt
from your_table yt
join bdays
on bdays.d = yt.txn_date
order by yt.account
, yt.txn_date
/
I haven't rounded the average amount.
You have 40 months of data, and this data should be very stable.
I will assume that you have a cold body (a big, stable, easily definable range of data) and a hot tail (a small and active part).
Next, I would like to define a minimal period. It is the smallest date range that is interesting to the business.
It might be a year, month, day, hour, etc. Do you expect to get questions like "what was the average for that account between 19:00 and 12am yesterday?"
I will assume that the answer is DAY.
Then:
I will calculate sum(amount) and count(*) for every account for every DAY of the cold body.
I will not create dummy records if a particular account had no activity on some day.
And I will save the day, account, total amount, and count in a TABLE, as sketched below.
If there are later modifications to the cold body, you delete and reload the affected days in that table.
For the hot tail there are multiple strategies:
1. Do the same as above (same process, clear to support)
2. Always calculate on the fly
3. Use a materialized view as a compromise between 1 and 2.
The cold body table (totalc) could also be implemented as a materialized view, but if the data never changes there is no need to rebuild it.
With this you go from (number of accounts) x (number of transactions per day) x (number of days) records down to (number of accounts) x (number of active days) records.
That should speed up all subsequent calculations.
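A minimal sketch of the cold-body table, reusing the question's table and column names (the boundary date is an assumption):
CREATE TABLE daily_account_totals AS
SELECT tb.date AS d
     , tb.account
     , SUM(tb.amount) AS total_amount
     , COUNT(*) AS txn_count
FROM mytable tb
WHERE tb.date < DATE '2014-04-01'  -- assumed cold/hot boundary
GROUP BY tb.date, tb.account;
Monthly aggregates then read from this much smaller table, and only the hot tail needs live aggregation.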

To display only previous three months even the months before is not exist in database

Below is my new SQL so far, as I did not manage to use Dale M's advice:
SELECT
    all_months.a_month_id AS month,
    year($P{date}) AS year,
    count(case when clixsteraccount.rem_joindate between DATE_FORMAT($P{date} - INTERVAL 2 MONTH, '%Y-%m-01') AND $P{date}
               then clixsteraccount.rem_registerbycn end) AS total_activation,
    'ACTIVATION(No)' AS fake_column
FROM clixsteraccount
RIGHT JOIN all_months
    ON all_months.a_month_id = date_format(clixsteraccount.rem_joindate, '%m')
    AND (clixsteraccount.rem_registrationtype = 'Normal')
    AND (clixsteraccount.rem_kapowstatus = 'pending' OR clixsteraccount.rem_kapowstatus = 'success')
GROUP BY year, month
HAVING month BETWEEN month(date_sub($P{date}, interval 2 month)) AND month($P{date})
So, what I do is create a table with two fields, a_month_id (1,2,3,...,12) and a_month (names of the months). The SQL above does give me what I want, which is to display the previous 3 months even if the earlier months do not exist in the database.
e.g. data starts in July, so I want to display May, June and July data like 0, 0, 100.
The problem occurs when it comes to the next months or the next year. When I try to generate the SQL with the parameter set to Jan, it doesn't work like I thought. I realize the problem is with the HAVING condition. Does anyone have an idea how to improve this SQL so that it keeps working into the next year and beyond?
Thank you in advance.
OK, I will make a few suggestions and give you an answer that will work on SQL Server - you will need to make any translations yourself.
I note that your query will aggregate all years together, i.e. Dec 2012 + Dec 2013 + Dec 2014 etc. Based on your question I don't think that is your intention so I will keep each distinct. You can change the query if that was your intention. I have also not included your selection criteria (other than by the month).
I suggest that you utilize an index table. This is a table stored in your database (or the master database if possible) with a clustered indexed integer column running from 0 to n, where n is a sufficiently large number - 10,000 will be more than enough for this application (there are 12 months in a year, so 10,000 represents 833 years). This table is so useful everyone should have one.
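If you don't have one yet, it could be built with a sketch like this (names assumed):
CREATE TABLE IndexTable (id INT NOT NULL PRIMARY KEY CLUSTERED);

-- cross join two catalog views to get plenty of rows, number them from 0
INSERT INTO IndexTable (id)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
FROM sys.all_objects a CROSS JOIN sys.all_objects b;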
SELECT DATEADD(month, it.id, 0) AS month
     , ISNULL(COUNT(ca.rem_registerbycn), 0) AS registration
     , 'REGISTRATION(No)' AS fake_column
FROM cn
INNER JOIN clixsteraccount ca
    ON ca.rem_registerbycn = cn.cn_id
RIGHT JOIN IndexTable it
    ON it.id = DATEDIFF(month, 0, ca.rem_joindate)
WHERE it.id BETWEEN DATEDIFF(month, 0, @StartDate) - 3 AND DATEDIFF(month, 0, GETDATE())
GROUP BY it.id
The way it works is by converting ca.rem_joindate to an integer that represents the number of months since date 0 (01-01-1900 in SQL Server). This is then matched to the id column of the IndexTable and limited by the dates you select. Because every number exists in the index table and we are using an outer join, it doesn't matter if there are months missing from your data.
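A quick check of the month-number arithmetic (date 0 is 01-01-1900 on SQL Server):
SELECT DATEDIFF(month, 0, '2014-04-15') AS months_since_1900  -- 1371
     , DATEADD(month, DATEDIFF(month, 0, '2014-04-15'), 0) AS month_start;  -- 2014-04-01 00:00:00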

Adjust date column for change over time

This is an easy enough problem, but I'm wondering if anyone can provide a more elegant solution.
I've got a table that consists of a date column (month-end dates over time) and several value columns, say the price of a variety of stocks over time, one column for each stock. I'd like to calculate the change in the value columns for each period represented in the date column (e.g., a monthly return from a table filled with month-end prices).
My current plan is to join the table to itself and simply create a new column for the return as ret = b.price/a.price - 1. Code as follows:
select Date, Ret = (b.stock1/a.stock1 - 1)
from #temp a, #temp b
where datediff(day, a.Date,b.Date) between 25 and 35
order by a.Date
This works fine, BUT:
(1) I need to do this for, say, dozens of stocks. Is there a good way to replicate the calculation without copying and pasting it and replacing 'stock1' with each other stock name?
(2) Is there a better way to do this join? I'm effectively doing a cross join and only keeping entries that are adjacent (as defined by the datediff range), but I wonder whether there's a better way to self-join a table like this.
EDIT: Per request, data is in the form (my data has multiple price columns though):
Date Price
7/1/1996 349.22
7/31/1996 337.72
8/30/1996 343.70
9/30/1996 357.23
10/31/1996 364.07
11/29/1996 385.04
12/31/1996 383.68
And from that, I'd like to calculate return, to generate a table like this (again, with additional columns for the extra price columns that exist in the actual table):
Date Ret
7/31/1996 -0.03
8/30/1996 0.02
9/30/1996 0.04
10/31/1996 0.02
11/29/1996 0.06
12/31/1996 0.00
I would do the following. First, use the month and year to do the self-join. I would recommend you take the year * 12 plus the month number to get a unique value for each month and year combination. So, Jan of 2011 would have a value of (2011 * 12 + 1 = 24133) and December of 2010 would have a value of (2010 * 12 + 12 = 24132). This allows you to accurately compare months without having to mess with rolling over from December to January.
Next, you need to supply the calculations in the select clause. If you have the stock values in different columns then you will have to type them out as b.stock1/a.stock1 - 1, b.stock2/a.stock2 - 1, etc. The only way around that would be to massage the data so that there is only one stock value column plus a stockname column identifying which stock each value is for (see the sketch after the query below).
Using the month and year for the self-join, the following query should work:
select b.Date, Ret = (b.stock1 / a.stock1 - 1)
from #temp a
inner join #temp b
    on (YEAR(b.Date) * 12) + MONTH(b.Date) = (YEAR(a.Date) * 12) + MONTH(a.Date) + 1
order by a.Date
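To avoid repeating the expression for every stock, the "one value column" reshaping mentioned above could look roughly like this (a sketch; stock2 and stock3 stand in for your actual column names):
with prices as (
    select Date, StockName, Price
    from #temp
    unpivot (Price for StockName in (stock1, stock2, stock3)) u
)
select b.Date, b.StockName, b.Price / a.Price - 1 as Ret
from prices a
inner join prices b
    on a.StockName = b.StockName
   and (YEAR(b.Date) * 12) + MONTH(b.Date) = (YEAR(a.Date) * 12) + MONTH(a.Date) + 1
order by b.StockName, b.Date;
One return expression then covers every stock.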

SQL complex view for virtually showing data

I have a table with the following data.
----------------------------------
Hour Location Stock
----------------------------------
6 2000 20
9 2000 24
----------------------------------
So this shows the stock against some of the hours in which there is a change in the quantity.
Now my requirement is to create a view on this table which will virtually show the data (even if stock is not there for a particular hour). So the data that should be shown is:
----------------------------------
Hour Location Stock
----------------------------------
6 2000 20
7 2000 20 -- same as hour 6 stock
8 2000 20 -- same as hour 6 stock
9 2000 24
----------------------------------
That means even if the data is not there for some particular hour, we should show the last hour's stock that has a value. And I have another table with all the available hours from 1-23 in a column.
I have tried the partition-by method as given below, but I think I am missing something to get my requirement done.
SELECT
    HOUR_NUMBER,
    CASE WHEN TOTAL_STOCK IS NULL
         THEN SUM(TOTAL_STOCK) OVER (
                  PARTITION BY LOCATION
                  ORDER BY CURRENT_HOUR ROWS 1 PRECEDING
              )
         ELSE TOTAL_STOCK
    END AS FULL_STOCK
FROM
(
    SELECT HOUR_NUMBER AS HOUR_NUMBER
    FROM HOURS_TABLE -- REFERENCE TABLE WITH HOURS FROM 1-23
    GROUP BY 1
) HOURS_REF
LEFT OUTER JOIN
(
    SEL CURRENT_HOUR AS CURRENT_HOUR
      , STOCK AS TOTAL_STOCK
      , LOCATION AS LOCATION
    FROM STOCK_TABLE
    WHERE STOCK <> 0
) STOCKS
ON HOURS_REF.HOUR_NUMBER = STOCKS.CURRENT_HOUR
This query gives all the hours, but with stock as null for the hours without data.
We are looking for an ANSI SQL solution so that it can be used on databases like Teradata.
I am thinking that I am using partition-by wrongly, or is there some other way? We tried CASE WHEN, but that needs some kind of looping to check back for an hour with some stock.
I've run into similar problems before. It's often simpler to make sure that the data you need somehow gets into the database in the first place. You might be able to automate it with a stored procedure that runs periodically.
Having said that, did you consider trying COALESCE() with a scalar subquery? (Or whatever similar function your dbms supports.) I'd try it myself and post the SQL, but I'm leaving for work in two minutes.
Haven't tried, but along the lines of what Mike said:
SELECT a.hour
, COALESCE( a.stock
, ( select b.stock
from tbl.b
where b.hour=a.hour-1 )
) "stock"
FROM tbl a
Note: this will impact performance greatly.
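Note also that the scalar subquery above only looks back a single hour, so a two-hour gap would still yield NULL. A variant (same assumed names) that reaches back to the latest populated hour:
SELECT a.hour
     , COALESCE( a.stock
               , ( select b.stock
                   from tbl b
                   where b.hour = ( select max(c.hour)
                                    from tbl c
                                    where c.hour <= a.hour
                                      and c.stock is not null ) )
               ) "stock"
FROM tbl a
-- with multiple locations, correlate on location in both subqueries as well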
Thanks for your responses. I have tried a RECURSIVE VIEW for the above requirement and it gives correct results (though I fear the CPU usage for big tables, as it is recursive). So here is the stock table:
----------------------------------
Hour Location Stock
----------------------------------
6 2000 20
9 2000 24
----------------------------------
Then we have a view on this table which gives all 12 hours' data, using a left outer join:
----------------------------------
Hour Location Stock
----------------------------------
6 2000 20
7 2000 NULL
8 2000 NULL
9 2000 24
----------------------------------
Then we have a recursive view which joins the left-outer-join view with itself repeatedly, shifting each hour's stock one hour up at each step and tagging each row with the recursion level at which it was produced.
REPLACE RECURSIVE VIEW HOURLY_STOCK_VIEW (HOUR_NUMBER, LOCATION, STOCK, LVL)
AS
(
    SELECT
        HOUR_NUMBER,
        LOCATION,
        STOCK,
        1 AS LVL
    FROM STOCK_VIEW_WITH_LEFT_OUTER_JOIN
    UNION ALL
    SELECT
        STK.HOUR_NUMBER,
        THE_VIEW.LOCATION,
        THE_VIEW.STOCK,
        LVL + 1 AS LVL
    FROM STOCK_VIEW_WITH_LEFT_OUTER_JOIN STK
    JOIN HOURLY_STOCK_VIEW THE_VIEW
        ON THE_VIEW.HOUR_NUMBER = STK.HOUR_NUMBER - 1
    WHERE LVL <= 12
)
;
You can observe that we first select from the left-outer-join view, then union it with the same view joined against the recursive view being defined, tagging each row with the level at which it was produced.
Then we select the data from this view with the minimum level:
SEL * FROM HOURLY_STOCK_VIEW
WHERE (HOUR_NUMBER, LVL) IN
(
    SEL HOUR_NUMBER, MIN(LVL)
    FROM HOURLY_STOCK_VIEW
    WHERE STOCK IS NOT NULL
    GROUP BY 1
)
;
This is working fine and giving the result as
----------------------------------
Hour Location Stock
----------------------------------
6 2000 20
7 2000 20 -- same as hour 6 stock
8 2000 20 -- same as hour 6 stock
9 2000 24
10 2000 24
11 2000 24
12 2000 24
----------------------------------
I know this is going to take huge CPU for large tables to get the recursion to work (we limit the recursion to 12 levels, as 12 hours of data is needed, to stop it going into an infinite loop). But I thought somebody could use this for some kind of hierarchy building. I will look for some more responses on other available approaches. Thanks. You can have a look at recursive views for Teradata at the link below.
http://forums.teradata.com/forum/database/recursion-in-a-stored-procedure
The most common use of a view is the removal of complexity.
For example:
CREATE VIEW FEESTUDENT
AS
SELECT S.NAME, F.AMOUNT
FROM STUDENT AS S
INNER JOIN FEEPAID AS F ON S.TKNO = F.TKNO
Now do a SELECT:
SELECT * FROM FEESTUDENT