I have a table table1
ITEM_CODE DESC MONTH DAY01 DAY02 DAY03
FG0050BCYL0000CD CYL HEAD FEB-15 0 204 408
FG00186CYL0000CD POWER UNIT FEB-15 425 123 202
I want to insert data into another table, table2, from table1 in the following way:
ITEM_CODE MONTH DATE QUANTITY
FG0050BCYL0000CD FEB-15 01-FEB-2015 0
FG0050BCYL0000CD FEB-15 02-FEB-2015 204
FG0050BCYL0000CD FEB-15 03-FEB-2015 408
FG00186CYL0000CD FEB-15 01-FEB-2015 425
FG00186CYL0000CD FEB-15 02-FEB-2015 123
FG00186CYL0000CD FEB-15 03-FEB-2015 202
Please tell me how to achieve this via PL/SQL.
This SQL query worked for me.
with items as (
select table1.*,
to_date(month||'-01', 'MON-YY-DD', 'NLS_DATE_LANGUAGE=American') day
from table1)
select item_code, month, day + lvl - 1 day,
case extract(Day from day + lvl - 1)
when 1 then day01
when 2 then day02
when 3 then day03
-- <- insert rest (day04...day30) here
when 31 then day31
end value
from items
join (select level lvl from dual connect by level<32) n
on day + lvl - 1 <= last_day(day)
The subquery items attaches the first day of the month to the data. Next I join this subquery with another, hierarchical subquery, which produces a simple list of 31 numbers (from 1 to 31). The join is constructed so that the date cannot exceed the last day of the month.
So for each row in table1 we get 28, 29, 30 or 31 rows with the proper dates.
Now comes a simple but tedious task: for each day we have to pick the value from the proper column, hence the case expression. The solution shows only four branches; you will need to fill in the rest (day04 ... day30).
At the end just insert the results into table2.
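For example, the final statement could look like this (a sketch; it assumes table2 has exactly the four columns shown in the question, with DATE quoted because it is a reserved word):
insert into table2 (item_code, month, "DATE", quantity)
with items as (
  select table1.*,
         to_date(month||'-01', 'MON-YY-DD', 'NLS_DATE_LANGUAGE=American') day
  from table1)
select item_code, month, day + lvl - 1,
       case extract(day from day + lvl - 1)
         when 1 then day01
         when 2 then day02
         when 3 then day03
         -- day04 ... day31 branches go here
       end
from items
join (select level lvl from dual connect by level < 32) n
  on day + lvl - 1 <= last_day(day);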
The following should get you close:
BEGIN
  FOR aRow IN (SELECT * FROM TABLE1)
  LOOP
    INSERT INTO TABLE2 (ITEM_CODE, MONTH, "DATE", QUANTITY)
    VALUES (aRow.ITEM_CODE, aRow.MONTH,
            TO_DATE(aRow.MONTH, 'MON-RR') + 0, aRow.DAY01);
    INSERT INTO TABLE2 (ITEM_CODE, MONTH, "DATE", QUANTITY)
    VALUES (aRow.ITEM_CODE, aRow.MONTH,
            TO_DATE(aRow.MONTH, 'MON-RR') + 1, aRow.DAY02);
    INSERT INTO TABLE2 (ITEM_CODE, MONTH, "DATE", QUANTITY)
    VALUES (aRow.ITEM_CODE, aRow.MONTH,
            TO_DATE(aRow.MONTH, 'MON-RR') + 2, aRow.DAY03);
  END LOOP;
END;
Note that the column names DESC and DATE are both reserved words in Oracle, which means they must be quoted as shown above. It would be simpler to use different names, such as DESCRIPTION and ACTIVITY_DATE, to eliminate the need to quote these names every time they're used.
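For instance, a table2 definition matching the usage above might look like this (a sketch; the column types are assumptions):
CREATE TABLE table2 (
  item_code VARCHAR2(20),
  month     VARCHAR2(10),
  "DATE"    DATE,        -- reserved word, so it must be quoted everywhere it is used
  quantity  NUMBER
);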
Best of luck.
I have a slowly changing type 2 price change table which I need to reduce the size of to improve performance. Often rows are written to the table even if no price change occurred (when some other dimensional field changed), and the result is that for any product the table could be 3-10x the size it needs to be if it included only changes in price.
I'd like to compress the table so that it contains only the first effective date and last expiration date for each price, until that price changes. The solution must also:
Deal with an unknown number of rows of the same price
Deal with products going back to an old price
As an example, if I have this raw data:
Product  Price Effective Date  Price Expiration Date  Price
123456   6/22/18               9/19/18                120
123456   9/20/18               11/8/18                120
123456   11/9/18               11/29/18               120
123456   11/30/18              12/6/18                120
123456   12/7/18               12/19/18               85
123456   12/20/18              1/1/19                 85
123456   1/2/19                2/19/19                85
123456   2/20/19               2/20/19                120
123456   2/21/19               3/19/19                85
123456   3/20/19               5/22/19                85
123456   5/23/19               10/10/19               85
123456   10/11/19              6/19/20                80
123456   6/20/20               12/31/99               80
I need to transform it into this:
Product  Price Effective Date  Price Expiration Date  Price
123456   6/22/18               12/6/18                120
123456   12/7/18               2/19/19                85
123456   2/20/19               2/20/19                120
123456   2/21/19               10/10/19               85
123456   10/11/19              12/31/99               80
You can first find the intervals where the price does not change, and then group on those intervals:
with to_r as (select row_number() over (order by (select 1)) r, t.* from data_table t),
to_group as (select t.*, (select sum(t1.r < t.r and t1.price != t.price) from to_r t1) c from to_r t)
select t.product, min(t.effective), max(t.expiration), max(t.price)
from to_group t
group by t.product, t.c
order by min(t.r);
Here c counts, for each row, how many earlier rows carry a different price; that count changes exactly when the price changes, so it labels each island of constant price.
Output:
Product  Price Effective Date  Price Expiration Date  Price
123456   6/22/18               12/6/18                120
123456   12/7/18               2/19/19                85
123456   2/20/19               2/20/19                120
123456   2/21/19               10/10/19               85
123456   10/11/19              12/31/99               80
This is a type of gaps-and-islands problem. I would recommend reconstructing the data, saving it in a temporary table, and then reloading the existing table.
The code to reconstruct the data is:
select product, price, min(effective_date), max(expiration_date)
from (select t.*,
sum(case when prev_expiration_date = effective_date - interval '1 day' then 0 else 1 end) over (partition by product order by effective_date) as grp
from (select t.*,
lag(expiration_date) over (partition by product, price order by effective_date) as prev_expiration_date
from t
) t
) t
group by product, price, grp;
Note that the logic for date arithmetic varies depending on the database.
Save this result into a temporary table, temp_t or whatever, using select into, create table as, or whatever your database supports.
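For example, with create table as, you would simply prefix the query above (a sketch; use select into instead on SQL Server):
create table temp_t as
select product, price, min(effective_date) as effective_date,
       max(expiration_date) as expiration_date
from (select t.*,
             sum(case when prev_expiration_date = effective_date - interval '1 day' then 0 else 1 end) over (partition by product order by effective_date) as grp
      from (select t.*,
                   lag(expiration_date) over (partition by product, price order by effective_date) as prev_expiration_date
            from t
           ) t
     ) t
group by product, price, grp;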
Then empty the current table and reload it:
truncate table t;
insert into t
select product, price, effective_date, expiration_date
from temp_t;
Notes:
Validate the data before running truncate table!
If there are triggers or columns with default values, you might want to be careful.
It sounds like you are asking for a temporal schema, where for a given date you can know the price of an asset?
This is done with two tables: price_current and price_history.
price_current:
price_id  item_id  price  rec_created
1         1        100    '2015-04-18'
price_history:
price_id  item_id  from          to            price
1         1        '2001-01-01'  '2004-05-01'  114
1         1        '2004-05-01'  '2015-04-18'  102
i.e. for any item, you can ascertain the date it was set without polluting your "current" table. For this to work effectively you will need UPDATE triggers on your price_current table. When you update a record, you insert the details and the period it was valid for into the history table.
CREATE OR ALTER TRIGGER trg_price_current_update
ON price_current
AFTER UPDATE
AS
BEGIN
    -- "deleted" holds the pre-update rows (the "rows_updated" of the original pseudocode);
    -- [from]/[to] are bracketed because they are reserved words
    INSERT INTO price_history (price_id, item_id, [from], [to], price)
    SELECT price_id, item_id, rec_created, GETDATE(), price
    FROM deleted;
END
Now you have a distinction between current and historical, without your current table (presumably the busier table) getting out of hand because of maintaining historical state. Hope I understood the question.
To ignore 'dummy' updates, just alter the trigger to ignore empty changes (if that's not handled by the DBMS anyway). To be honest, this should and could be done application-side easily enough, but to manage it via the trigger:
CREATE OR ALTER TRIGGER trg_price_current_update
ON price_current
AFTER UPDATE
AS
BEGIN
    -- compare the previous row (deleted) with the new row (inserted) and
    -- record history only when the price actually changed
    INSERT INTO price_history (price_id, item_id, [from], [to], price)
    SELECT d.price_id, d.item_id, d.rec_created, GETDATE(), d.price
    FROM deleted d
    INNER JOIN inserted i ON d.price_id = i.price_id
    WHERE d.price <> i.price;
END
i.e. deleted contains the rows as they were before the update; we insert the previous row into the history table, provided the previous row's price is different from the new row's price.
(Edited to include the new trigger. I also changed the date held in rec_created: it must be the date the row is created, not the first time that product had a price assigned to it; that was a mistake. Regarding the dates, I was too lazy to write the full DD-MM-YYYY hh:mm:ss:zzz, but that precision would generally be useful between queries.)
What you are asking for is a versioning system. Many RDBMS platforms implement support for this out of the box (it's a SQL standard), which may be suitable, depending on your requirements.
You have not tagged a specific platform, so it's not possible to be specific to your situation. I use system versioning regularly in MS SQL Server, where you would implement it thus:
Assuming the schema "History" exists:
alter table dbo.MyTable add
    ValidFrom datetime2 generated always as row start hidden
        constraint DF_MyTableSysStart default sysutcdatetime(),
    ValidTo datetime2 generated always as row end hidden
        constraint DF_MyTableSysEnd default convert(datetime2, '9999-12-31 23:59:59.9999999'),
    period for system_time (ValidFrom, ValidTo);

alter table dbo.MyTable set (system_versioning = on (history_table = History.MyTable));

create clustered index ix_MyTable on History.MyTable (ValidTo, ValidFrom)
    with (data_compression = page, drop_existing = on) on History;
A number of syntax extensions exist to aid querying the temporal data, for example to find historical data at a point in time.
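For instance (a sketch using the FOR SYSTEM_TIME clause; the timestamp is arbitrary):
select * from dbo.MyTable
for system_time as of '2019-01-01T00:00:00';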
Alternatively, to utilise a single table but handle the duplication, you could create an INSTEAD OF trigger (a trigger sketch follows the example output below).
The idea here is that the trigger gets to intercept the data before it is inserted, where you can check to see if the value is different from the last value and discard or insert as appropriate.
The duplicate check could be something along the lines of:
WITH keeps AS
(
    SELECT p.product_id, p.effective, p.expires, p.price,
           CASE WHEN EXISTS (SELECT 1 FROM prices p1
                             WHERE p1.product_id = p.product_id
                               AND p1.effective = DATEADD(DAY, 1, p.expires)
                               AND p1.price <> p.price)
                THEN 1 ELSE 0 END AS has_after,
           CASE WHEN EXISTS (SELECT 1 FROM prices p1
                             WHERE p1.product_id = p.product_id
                               AND p1.expires = DATEADD(DAY, -1, p.effective)
                               AND p1.price <> p.price)
                THEN 1 ELSE 0 END AS has_before
    FROM prices p
)
SELECT product_id, effective, expires, price FROM keeps
WHERE has_after = 1
   OR has_before = 1
UNION ALL
SELECT p.product_id, p.effective, p.expires, p.price
FROM prices p
WHERE p.effective = (SELECT MIN(effective) FROM prices p1 WHERE p1.product_id = p.product_id)
What's it doing:
Find all the entries for which another entry exists whose effective date is the current entry's expiry date + 1 and whose price is different. This gives us all the actual changes in price. But we would miss the first price entry, so we simply include that in the results. (The table below shows the keeps rows with their has_before/has_after flags.)
e.g.:
product_id
effective
expires
price
has_before
has_after
123456
6/22/18
9/19/18
120
0
0
123456
9/20/18
11/8/18
120
0
0
123456
11/9/18
11/29/18
120
0
0
123456
11/30/18
12/6/18
120
0
1
123456
12/7/18
12/19/18
85
1
0
123456
12/20/18
1/1/19
85
0
0
123456
2/1/19
2/19/19
85
0
1
123456
2/20/19
2/20/19
120
1
1
123456
2/21/19
3/19/19
85
1
0
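Returning to the INSTEAD OF trigger idea, a minimal sketch (table and column names assumed from the example above; a real implementation would need more care with multi-row inserts and edge cases):
CREATE OR ALTER TRIGGER trg_prices_insert
ON prices
INSTEAD OF INSERT
AS
BEGIN
    -- if an incoming row merely continues the previous interval at the same
    -- price, extend that interval instead of inserting a duplicate
    UPDATE p
    SET p.expires = i.expires
    FROM prices p
    INNER JOIN inserted i
        ON i.product_id = p.product_id
       AND p.expires = DATEADD(DAY, -1, i.effective)
       AND p.price = i.price;

    -- insert only the rows that represent a genuine price change
    INSERT INTO prices (product_id, effective, expires, price)
    SELECT i.product_id, i.effective, i.expires, i.price
    FROM inserted i
    WHERE NOT EXISTS (SELECT 1 FROM prices p
                      WHERE p.product_id = i.product_id
                        AND p.expires = i.expires
                        AND p.price = i.price); -- already absorbed by the UPDATE above
END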
I would like to take a set of data and expand it by adding date rows based on an existing field. For instance, if I have the following table (TABLE1):
ID NAME YEAR
1 John 2001
2 Jim 2012
3 Sally 2005
I want to take this data and put it into another table but expand it to include a set of months (and from there I can add monthly information). If I just look at the first record (John) my result would be:
ID NAME YEAR MONTH
1 John 2001 01-JAN-2001
1 John 2001 01-FEB-2001
1 John 2001 01-MAR-2001
...
1 John 2001 01-DEC-2001
I have the mechanism to derive my monthly dates, but how do I extract the data from TABLE1 to make TABLE2? Here is just a quick query but, of course, I get ORA-01427 (single-row subquery returns more than one row), as expected. I'm just not sure how to organize the query to put these two pieces together:
select id,
name,
year,
book_cd,
(SELECT ADD_MONTHS('01-JAN-'|| year, LEVEL - 1)
FROM DUAL CONNECT BY LEVEL <= 12) month
from table1 ;
I realize I can't do this, but I'm not sure how to put the two pieces together. I plan to bulk process records, so it won't be one ID at a time. Thanks for the help.
You can use a cross join:
select t.id,
t.name,
t.year,
t.book_cd,
ADD_MONTHS(to_date(t.year || '-01-01', 'YYYY-MM-DD'), m.rn) as mnth
from table1 t
cross join (select rownum - 1 as rn
from dual
connect by rownum <= 12) m
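To load TABLE2 in bulk, the same query can feed an insert (a sketch; it assumes TABLE2 has matching columns):
insert into table2 (id, name, year, book_cd, month)
select t.id,
       t.name,
       t.year,
       t.book_cd,
       add_months(to_date(t.year || '-01-01', 'YYYY-MM-DD'), m.rn)
from table1 t
cross join (select rownum - 1 as rn
            from dual
            connect by rownum <= 12) m;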
I have two tables as follows:
Table 1
Columns - oppproductid, SKU, Price, Quantity, Date
Values - PR1, ABCSKU1, 1000, 500, 10/2013
Table 2
Columns - opproductid, month_1, Month_2, Month_3, Month_4...Month_36
Values - PR1, 200, 100, NULL, 200...
The tables are 1-to-1. I need one row for each non-null month column value for each record, with a date calculated on the assumption that Month_1 corresponds to the Date column in the primary table. So the ideal result set based on the sample values is:
oppproductid SKU Price Quantity Date Deployment
PR1 ABCSKU1 1000 500 10/2013 200
PR1 ABCSKU1 1000 500 11/2013 100
PR1 ABCSKU1 1000 500 1/2014 200
NOTES:
Month_3 is NULL so 12/2013 does not yield results.
There are 36 months in the second table with the only requirement that one has to contain data.
Month_1 always equals the date on the first table.
Any help is appreciated.
Store your data using the proper data types. Dates should be date fields.
Normalise your data structures to make querying easier.
Try this:
set dateformat dmy

select t1.oppproductid,
       t1.SKU,
       t1.Price,
       t1.Quantity,
       dateadd(month, monthno - 1, convert(date, '1/' + t1.[date])) as [Date],
       deployment
from table1 t1
inner join
(
    select *, convert(int, substring(mth, 7, 2)) as monthno
    from table2
    unpivot (deployment for mth in (month_1, month_2, month_3, month_4...)) u
) u2
    on t1.oppproductid = u2.opproductid
I'm struggling to find the query for the following task.
I have the following data and want to find the total network days for each unique ID:
ID From To NetworkDay
1 03-Sep-12 07-Sep-12 5
1 03-Sep-12 04-Sep-12 2
1 05-Sep-12 06-Sep-12 2
1 06-Sep-12 12-Sep-12 5
1 31-Aug-12 04-Sep-12 3
2 04-Sep-12 06-Sep-12 3
2 11-Sep-12 13-Sep-12 3
2 05-Sep-12 08-Sep-12 3
The problem is that the date ranges can overlap, and I can't come up with SQL that will give me the following results:
ID From To NetworkDay
1 31-Aug-12 12-Sep-12 9
2 04-Sep-12 08-Sep-12 4
2 11-Sep-12 13-Sep-12 3
and then
ID Total Network Day
1 9
2 7
In case the network day calculation is not possible, just getting to the second table would be sufficient.
Hope my question is clear.
We can use Oracle analytics, namely the "OVER ... PARTITION BY" clause, to do this. The PARTITION BY clause is kind of like a GROUP BY, but without the aggregation part. That means we can group rows together (i.e. partition them) and then perform an operation on them as separate groups. As we operate on each row, we can also access the columns of the previous rows. This is the feature PARTITION BY gives us. (PARTITION BY is not related to partitioning a table for performance.)
So then how do we output the non-overlapping dates? We first order the query based on the (ID, DFROM) fields, then we use the ID field to make our partitions (row groups). We then test the previous row's TO value and the current row's FROM value for overlap, using an expression like this (in pseudocode):
max(previous.DTO, current.DFROM) as DFROM
This basic expression will return the original DFROM value if it doesn't overlap, but will return the previous TO value if there is overlap. Since our rows are ordered, we only need to be concerned with the last row. In cases where a previous row completely overlaps the current row, we want the row to have a 'zero' date range. So we do the same thing for the DTO field to get:
max(previous.DTO, current.DFROM) as DFROM, max(previous.DTO, current.DTO) as DTO
Once we have generated the new results set with the adjusted DFROM and DTO values, we can aggregate them up and count the range intervals of DFROM and DTO.
Be aware that most date calculations in databases are not inclusive, unlike your data. So something like DATEDIFF(dto, dfrom) will not include the day dto actually refers to, so we will want to adjust dto up a day first.
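For example, 03-Sep-12 through 07-Sep-12 covers five calendar days, but the difference 07-Sep minus 03-Sep is only 4; bumping dto up to 08-Sep makes the subtraction come out to 5.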
I don't have access to an Oracle server anymore, but I know this is possible with Oracle analytics. The query should go something like this (please update my post if you get this to work):
SELECT id,
       GREATEST(dfrom, NVL(MAX(dto) OVER (PARTITION BY id ORDER BY dfrom
                           ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), dfrom)) AS dfrom,
       GREATEST(dto,   NVL(MAX(dto) OVER (PARTITION BY id ORDER BY dfrom
                           ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), dto))   AS dto
FROM (
    select id, dfrom, dto + 1 as dto from my_sample -- adjust the table so that dto becomes non-inclusive
) sample;
The secret here is the MAX(dto) OVER (PARTITION BY id ORDER BY dfrom ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) expression, which returns the largest dto seen in the rows before the current one. (GREATEST and NVL replace the original two-argument max() pseudocode and handle the first row in each partition.)
So this query should output new dfrom/dto values which don't overlap. It's then a simple matter of sub-querying this, taking (dto - dfrom), and summing the totals per id.
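A sketch of that final aggregation, wrapping the query above (note it counts calendar days in the merged ranges, not working days):
SELECT id, SUM(dto - dfrom) AS total_days
FROM (
    SELECT id,
           GREATEST(dfrom, NVL(MAX(dto) OVER (PARTITION BY id ORDER BY dfrom
                               ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), dfrom)) AS dfrom,
           GREATEST(dto,   NVL(MAX(dto) OVER (PARTITION BY id ORDER BY dfrom
                               ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), dto))   AS dto
    FROM (select id, dfrom, dto + 1 as dto from my_sample) sample
) merged
GROUP BY id;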
Using MySQL
I did have access to a MySQL server, so I did get it working there. MySQL doesn't have result partitioning (analytics) like Oracle, so we have to use result set variables. This means we use @var := xxx type expressions to remember the last date value and adjust dfrom/dto accordingly. Same algorithm, just a little longer and more complex syntax. We also have to forget the last date value any time the ID field changes!
So here is the sample table (same values you have):
create table sample(id int, dfrom date, dto date, networkDay int);
insert into sample values
(1,'2012-09-03','2012-09-07',5),
(1,'2012-09-03','2012-09-04',2),
(1,'2012-09-05','2012-09-06',2),
(1,'2012-09-06','2012-09-12',5),
(1,'2012-08-31','2012-09-04',3),
(2,'2012-09-04','2012-09-06',3),
(2,'2012-09-11','2012-09-13',3),
(2,'2012-09-05','2012-09-08',3);
On to the query; we output the ungrouped result set like the one above.
The variable @ldt is "last date", and the variable @lid is "last id". Any time @lid changes, we reset @ldt to null. FYI, in MySQL the := operator is where assignment happens; a plain = operator is just an equality test.
This is a 3-level query, but it could be reduced to 2; I went with an extra outer query to keep things more readable. The innermost query is simple: it adjusts the dto column to be non-inclusive and does the proper row ordering. The middle query adjusts the dfrom/dto values to make them non-overlapping. The outer query simply drops the unused fields and calculates the interval range.
set @ldt = null, @lid = null;

select id, no_dfrom as dfrom, no_dto as dto, datediff(no_dto, no_dfrom) as days
from (
    select if(@lid = id, @ldt, @ldt := null) as last,
           dfrom, dto,
           if(@ldt >= dfrom, @ldt, dfrom) as no_dfrom,
           if(@ldt >= dto, @ldt, dto) as no_dto,
           @ldt := if(@ldt >= dto, @ldt, dto),
           @lid := id as id,
           datediff(dto, dfrom) as overlapped_days
    from (select id, dfrom, dto + INTERVAL 1 DAY as dto from sample order by id, dfrom) as sample
) as nonoverlapped
order by id, dfrom;
The above query gives the results (notice dfrom/dto are non-overlapping here):
+------+------------+------------+------+
| id | dfrom | dto | days |
+------+------------+------------+------+
| 1 | 2012-08-31 | 2012-09-05 | 5 |
| 1 | 2012-09-05 | 2012-09-08 | 3 |
| 1 | 2012-09-08 | 2012-09-08 | 0 |
| 1 | 2012-09-08 | 2012-09-08 | 0 |
| 1 | 2012-09-08 | 2012-09-13 | 5 |
| 2 | 2012-09-04 | 2012-09-07 | 3 |
| 2 | 2012-09-07 | 2012-09-09 | 2 |
| 2 | 2012-09-11 | 2012-09-14 | 3 |
+------+------------+------------+------+
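From there, summing per id is straightforward. A sketch reusing the same middle query (this totals the merged calendar-day ranges shown above):
set @ldt = null, @lid = null;

select id, sum(datediff(no_dto, no_dfrom)) as total_days
from (
    select if(@lid = id, @ldt, @ldt := null) as last,
           if(@ldt >= dfrom, @ldt, dfrom) as no_dfrom,
           if(@ldt >= dto, @ldt, dto) as no_dto,
           @ldt := if(@ldt >= dto, @ldt, dto),
           @lid := id as id
    from (select id, dfrom, dto + INTERVAL 1 DAY as dto from sample order by id, dfrom) as sample
) as nonoverlapped
group by id;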
How about constructing a SQL query which merges intervals by removing holes and considering only maximal intervals? It goes like this (not tested):
SELECT DISTINCT F.ID, F.From, L.To
FROM Temp AS F, Temp AS L
WHERE F.From < L.To AND F.ID = L.ID
AND NOT EXISTS (SELECT *
FROM Temp AS T
WHERE T.ID = F.ID
AND F.From < T.From AND T.From < L.To
AND NOT EXISTS ( SELECT *
FROM Temp AS T1
WHERE T1.ID = F.ID
AND T1.From < T.From
AND T.From <= T1.To)
)
AND NOT EXISTS (SELECT *
FROM Temp AS T2
WHERE T2.ID = F.ID
AND (
(T2.From < F.From AND F.From <= T2.To)
OR (T2.From < L.To AND L.To < T2.To)
)
)
with t_data as (
select 1 as id,
to_date('03-sep-12','dd-mon-yy') as start_date,
to_date('07-sep-12','dd-mon-yy') as end_date from dual
union all
select 1,
to_date('03-sep-12','dd-mon-yy'),
to_date('04-sep-12','dd-mon-yy') from dual
union all
select 1,
to_date('05-sep-12','dd-mon-yy'),
to_date('06-sep-12','dd-mon-yy') from dual
union all
select 1,
to_date('06-sep-12','dd-mon-yy'),
to_date('12-sep-12','dd-mon-yy') from dual
union all
select 1,
to_date('31-aug-12','dd-mon-yy'),
to_date('04-sep-12','dd-mon-yy') from dual
union all
select 2,
to_date('04-sep-12','dd-mon-yy'),
to_date('06-sep-12','dd-mon-yy') from dual
union all
select 2,
to_date('11-sep-12','dd-mon-yy'),
to_date('13-sep-12','dd-mon-yy') from dual
union all
select 2,
to_date('05-sep-12','dd-mon-yy'),
to_date('08-sep-12','dd-mon-yy') from dual
),
t_holidays as (
select to_date('01-jan-12','dd-mon-yy') as holiday
from dual
),
t_data_rn as (
select rownum as rn, t_data.* from t_data
),
t_model as (
select distinct id,
start_date
from t_data_rn
model
partition by (rn, id)
dimension by (0 as i)
measures(start_date, end_date)
rules
( start_date[for i from 1 to end_date[0] - start_date[0] increment 1] =
      start_date[0] + cv(i),
  end_date[any] = start_date[cv()] + 1
)
order by 1,2
),
t_network_days as (
select t_model.*,
case when
mod(to_char(start_date, 'j'), 7) + 1 in (6, 7)
or t_holidays.holiday is not null
then 0 else 1
end as working_day
from t_model
left outer join t_holidays
on t_holidays.holiday = t_model.start_date
)
select id,
sum(working_day) as network_days
from t_network_days
group by id;
t_data - your initial data
t_holidays - contains list of holidays
t_data_rn - just adds unique key (rownum) to each row of t_data
t_model - expands t_data date ranges into a flat list of dates
t_network_days - marks each date from t_model as working day or weekend based on day of week (Sat and Sun) and holidays list
final query - calculates the number of network days per group.
This is SQL Server 2000 so I don't have any windowing functions (row_number).
I have a table emp_data :
emp_id datime miles gallons
23148 2011-08-21 02:00 32 3
23148 2011-08-21 09:00 38 4
23148 2011-08-21 11:00 40 5
42938 2011-08-20 03:00 23 1
42938 2011-08-22 08:00 53 13
Each row is cumulative (running?) from the previous one.
I need to get the number of miles driven by each employee, which I do by subtracting the miles at the earliest date from the miles at the latest date (40 - 32 = 8 miles driven for emp_id=23148). I need to do this for gallons too.
I need to calculate miles per gallon for each driver.
The end result should be this:
emp_id miles gallons
23148 8 2
42938 30 12
Doing it for multiple drivers is where I'm stuck. In SQL Server 2005, I could probably use ROW_NUMBER() with PARTITION BY, but I don't know what to do in SQL Server 2000. I've done something like this for one driver, but it won't work partitioned by drivers; I had to use IDENTITY() in place of ROW_NUMBER().
SELECT IDENTITY(int) as id, emp_id, datime, miles, gallons
into #t1
FROM emp_data
where
emp_id='18018'
and datime >= '20110820 02:00'
and datime <= '20110827 02:00'
ORDER BY datime
select foo1.emp_id, foo2.miles - foo1.miles as miles_driven,
foo2.gallons - foo1.gallons as gallons_used
from (
SELECT *
FROM #t1
where id = 1) foo1
CROSS JOIN (
SELECT *
from #t1
where id = (select max(id) from #t1 t)
) foo2
I do have a linked server to the SQL Server 2000 db from SQL Server 2008 so I'm thinking of getting the data and then processing there, but there's about 1 million records for just one week. I might need to do this for YTD.
Let me know if something is unclear. Sorry I don't have any sample data.
I think this is what you're looking for (no need for temp tables), but only use it if you have additional data constraints that we're not seeing. Otherwise go with smdrager's answer.
SELECT minmaxdate.emp_id,
LAST.miles - FIRST.miles,
LAST.gallons - FIRST.gallons
FROM (SELECT emp_id,
MIN(datime) firstdate,
MAX(datime) lastdate
FROM emp_data
GROUP BY emp_id) minmaxdate
INNER JOIN emp_data FIRST
ON FIRST.emp_id = minmaxdate.emp_id
AND FIRST.datime = minmaxdate.firstdate
INNER JOIN emp_data LAST
ON LAST.emp_id = minmaxdate.emp_id
AND LAST.datime = minmaxdate.lastdate
This should do it. Because the miles and gallons values are cumulative, MAX minus MIN per employee gives the totals:
SELECT
emp_id,
MAX(miles) - MIN(miles) AS miles_driven,
MAX(gallons) - MIN(gallons) AS gallons_used
FROM emp_data
GROUP BY emp_id
Hope this helps.