How can I multiply the rows of a select query with a for loop in Oracle? - sql

I have a query that I run with SELECT, and it returns one row at a time.
I wanted this query to write two rows to the declared v_output_piece_table by bulk collecting it twice with the for loop, but I saw that it wrote only a single row into v_output_piece_table.
For now I want it to produce two rows inside the for loop, but in the future the count will depend on a variable.
v_output_piece_table tbl_met_output_coil;
begin
    FOR sayac IN 1..2
    LOOP
        SELECT SUBSTR (sl.task_job_id, 1, 12) AS schedule_id,
               DENSE_RANK () OVER (ORDER BY sc.seq) AS coil_seq,
               round(p.ACTUAL_WEIGHT/3,3) AS weight,
               scd.so_id,
               scd.so_line_id
        BULK COLLECT INTO v_output_piece_table
        FROM sch_line sl,
             sch_input_material sim,
             sch_input_piece sip,
             sch_output_material som,
             sch_cut sc,
             sch_cut_detail scd,
             piece p
        WHERE sl.task_job_id = 180078
        AND sl.sch_line_num_id = sim.sch_line_num_id
        AND sl.sch_line_num_id = som.sch_line_num_id
        AND som.output_mat_num_id = sc.output_mat_num_id
        AND sc.schc_cut_num_id = scd.schc_cut_num_id
        AND sim.input_mat_num_id = sip.input_mat_num_id
        AND sip.piece_num_id = p.piece_num_id
        ORDER BY sl.seq, sim.seq, sip.seq;
    END LOOP;
end;
QUERY output:
+-------------+---------------+------------+----------+--------+-------+------------+
| SCHEDULE_ID | L3_OUTPUT_CNT | EN_COIL_ID | COIL_SEQ | WEIGHT | SO_ID | SO_LINE_ID |
+-------------+---------------+------------+----------+--------+-------+------------+
| 180078      | 1             | 21TT       | 1        | 39663  | 2     | 3          |
+-------------+---------------+------------+----------+--------+-------+------------+
What I want:
+-------------+------------+----------+--------+-------+------------+
| SCHEDULE_ID | EN_COIL_ID | COIL_SEQ | WEIGHT | SO_ID | SO_LINE_ID |
+-------------+------------+----------+--------+-------+------------+
| 180078      | 21TT       | 1        | 39663  | 2     | 3          |
| 180078      | 21TT       | 2        | 39663  | 2     | 3          |
+-------------+------------+----------+--------+-------+------------+
How can I get the output I want?

This is really easy to do if you cross join your query with a two-row query:
WITH your_query AS (SELECT SUBSTR (sl.task_job_id, 1, 12) AS schedule_id,
                           round(p.ACTUAL_WEIGHT/3,3) AS weight,
                           scd.so_id,
                           scd.so_line_id,
                           sl.seq sl_seq,
                           sim.seq sim_seq,
                           sip.seq sip_seq
                    FROM sch_line sl,
                         sch_input_material sim,
                         sch_input_piece sip,
                         sch_output_material som,
                         sch_cut sc,
                         sch_cut_detail scd,
                         piece p
                    WHERE sl.task_job_id = 180078
                    AND sl.sch_line_num_id = sim.sch_line_num_id
                    AND sl.sch_line_num_id = som.sch_line_num_id
                    AND som.output_mat_num_id = sc.output_mat_num_id
                    AND sc.schc_cut_num_id = scd.schc_cut_num_id
                    AND sim.input_mat_num_id = sip.input_mat_num_id
                    AND sip.piece_num_id = p.piece_num_id),
dummy AS (SELECT LEVEL id
          FROM dual
          CONNECT BY LEVEL <= 2)
SELECT yt.schedule_id,
       d.id coil_seq,
       yt.weight,
       yt.so_id,
       yt.so_line_id
BULK COLLECT INTO v_output_piece_table
FROM your_query yt
CROSS JOIN dummy d
ORDER BY yt.sl_seq,
         yt.sim_seq,
         yt.sip_seq,
         d.id;
The dual table is a special table that only contains one row and one column, and so you can use it to generate rows. You could have simply union'd two rows together in the dummy subquery, e.g.:
dummy AS (SELECT 1 ID FROM dual
UNION ALL
SELECT 2 ID FROM dual)
but I prefer the hierarchical trick with connect by, since it's easy to amend if in the future you need to triplicate (or more!) the rows.
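Since the question mentions that the number of copies will eventually depend on a variable: the CONNECT BY LEVEL limit can simply reference a PL/SQL variable, which gets bound like any other. A minimal self-contained sketch (v_copies and the local collection type are made up for illustration; only dual is queried, so it runs anywhere):
declare
    v_copies pls_integer := 3; -- hypothetical: copy count decided at runtime
    type t_num_tab is table of number;
    v_rows t_num_tab;
begin
    select src.val
    bulk collect into v_rows
    from (select 42 as val from dual) src -- stands in for your real query
    cross join (select level as id
                from dual
                connect by level <= v_copies) d; -- the PL/SQL variable is bound here
    dbms_output.put_line(v_rows.count || ' rows collected'); -- prints: 3 rows collected
end;
/
In your case you would keep the full your_query CTE exactly as above and only swap the literal 2 in the dummy subquery for the variable.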

Related

How to integrate over segments using SQL

I have a table with columns t_b, t_e, x, where [t_b, t_e) denotes a period during which x resources were used. I want to compute a table where, for each hour h, I have the amount of resources that were used during the [h, h+1) period.
So far my only idea was to generate multiple rows from each input row, one per hour (I use an extension of SQL with UDFs), and then simply group by hour, but I'm afraid this may be too slow considering the large amount of data at hand.
Say for example I have a table with two rows:
+-----+-----+---+
| t_b | t_e | x |
+-----+-----+---+
| 1 | 3.5 | a |
| 0.5 | 4 | b |
+-----+-----+---+
Then the resulting table should be:
+---+-------------+
| h | x |
+---+-------------+
| 0 | 0*a + 0.5*b |
| 1 | 1*a + 1*b |
| 2 | 1*a + 1*b |
| 3 | 0.5*a + 1*b |
+---+-------------+
You can have a trigger on insert into the stats table that also adds to the aggregate table (the per-hour sums).
If you also need to convert the existing data, you need to run over every row of your current table, split it into amounts/hours and add to the aggregate table.
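As a rough sketch of that trigger idea in SQL Server terms (heavily assumption-laden: dbo.stats with columns t_b, t_e, v and dbo.hourly_usage with columns h, usage are invented names; the overlap arithmetic matches the query below):
create trigger trg_stats_to_hourly
on dbo.stats -- hypothetical source table (t_b, t_e, v)
after insert
as
begin
    set nocount on;
    with h as (
        -- hour buckets 1..24, same tally trick as the query below
        select top(24) row_number() over(order by (select null)) eoh
        from sys.all_objects
    )
    merge dbo.hourly_usage as tgt -- hypothetical aggregate table (h, usage)
    using (
        select eoh - 1 as h,
               sum(v * (case when t_e < eoh then t_e else eoh end
                      - case when t_b > eoh - 1 then t_b else eoh - 1 end)) as usage
        from h
        join inserted i on i.t_e > eoh - 1 and eoh > i.t_b -- only overlapped hours
        group by eoh
    ) src
    on tgt.h = src.h
    when matched then update set tgt.usage = tgt.usage + src.usage
    when not matched then insert (h, usage) values (src.h, src.usage);
end;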
This is a SQL Server example, with all columns numeric:
with h as (
    -- your hours tally here
    select top(24) row_number() over(order by (select null)) eoh
    from sys.all_objects
), myTable as (
    select 1 t_b, 3.5 t_e, 20 v union all
    select 0.5, 4, 40
)
select eoh - 1 as h_start,
       sum(v * (case when t_e < eoh then t_e else eoh end
              - case when t_b > eoh - 1 then t_b else eoh - 1 end)) as usage
from h
left join myTable t on t_e > eoh - 1 and eoh > t_b -- [..) intersection with [..)
group by eoh;
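As a sanity check on the overlap arithmetic: for the row (t_b = 1, t_e = 3.5) and the bucket ending at eoh = 2, the expression evaluates to min(3.5, 2) - max(1, 1) = 1, so that row contributes a full hour's worth of v, which matches the 1*a expected for h = 1 in the question.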

Best Way to Join One Column on Columns From Two Other Tables

I have a schema like the following in Oracle
Section:
+--------+----------+
| sec_ID | group_ID |
+--------+----------+
| 1 | 1 |
| 2 | 1 |
| 3 | 2 |
| 4 | 2 |
+--------+----------+
Section_to_Item:
+--------+---------+
| sec_ID | item_ID |
+--------+---------+
| 1 | 1 |
| 1 | 2 |
| 2 | 3 |
| 2 | 4 |
+--------+---------+
Item:
+---------+------+
| item_ID | data |
+---------+------+
| 1 | a |
| 2 | b |
| 3 | c |
| 4 | d |
+---------+------+
Item_Version:
+---------+----------+--------+
| item_ID | start_ID | end_ID |
+---------+----------+--------+
| 1 | 1 | |
| 2 | 1 | 3 |
| 3 | 2 | |
| 4 | 1 | 2 |
+---------+----------+--------+
Section_to_Item has FK into Section and Item on the *_ID columns.
Item_version is indexed on item_ID but has no FK to Item.item_ID (ran out of space in the snapshot group).
I have code that receives a list of version IDs and I want to get all items in sections in a given group that are valid for at least one of the versions passed in. If an item has no end_ID, it's valid for anything starting with start_ID. If it has an end_id, it's valid for anything up until (not including) end_ID.
What I currently have is:
SELECT Item.data
FROM Section, Section_to_Item, Item, Item_Version
WHERE Section.group_ID = 1
AND Section_to_Item.sec_ID = Section.sec_ID
AND Item.item_ID = Section_to_Item.item_ID
AND Item.item_ID = Item_Version.item_ID
AND EXISTS (
    SELECT *
    FROM (
        SELECT 2 AS version FROM DUAL
        UNION ALL SELECT 3 AS version FROM DUAL
    ) passed_versions
    WHERE Item_Version.start_ID <= passed_versions.version
    AND (Item_Version.end_ID IS NULL OR Item_Version.end_ID > passed_versions.version)
)
Note that the UNION ALL statement is dynamically generated from the list of passed-in versions.
This query currently does a cartesian join and is very slow.
For some reason, if I change the query to join
AND Item_Version.item_ID = Section_to_Item.item_ID
which is not a FK, the query does not do the cartesian join and is much faster.
A) Can anyone explain why this is?
B) Is this the right way to be joining this sequence of tables (I feel weird about joining Item.item_ID to two different tables)
C) Is this the right way to get versions between start_ID and end_ID?
Edit
Same query with inner join syntax:
SELECT Item.data
FROM Item
INNER JOIN Section_to_Item ON Section_to_Item.item_ID = Item.item_ID
INNER JOIN Section ON Section.sec_ID = Section_to_Item.sec_ID
INNER JOIN Item_Version ON Item_Version.item_ID = Item.item_ID
WHERE Section.group_ID = 1
AND EXISTS (
    SELECT *
    FROM (
        SELECT 2 AS version FROM DUAL
        UNION ALL SELECT 3 AS version FROM DUAL
    ) passed_versions
    WHERE Item_Version.start_ID <= passed_versions.version
    AND (Item_Version.end_ID IS NULL OR Item_Version.end_ID > passed_versions.version)
)
Note that in this case the performance difference comes from joining on Item_Version first and then joining Section_to_Item on Item_Version.item_ID.
In terms of table size, Section_to_Item, Item, and Item_Version should be similar (1000s) while Section should be small.
Edit
I just found out that apparently the schema has no FKs. The FKs specified in the schema configuration files are ignored; they're just there for documentation. So there's no difference between joining on a FK column or not. That being said, by changing the joins into a cascade of SELECT INs, I'm able to avoid joining the entire Item table twice. I don't love the resulting query, and I don't really understand the difference, but the stats indicate it's much less work: the A-Rows returned from the innermost scan on Section drops from 656,000 to 488 (it used to be 656k starts returning 1 row each; now it's 488 starts returning 1 row each).
Edit
It turned out to be stale statistics - the two queries were equivalent the whole time, but with the incomplete statistics the DB happened to find the correct plan only in the second formulation. After updating statistics, both queries generated the same plan.
I'm not sure if this is the best idea but this seems to avoid the cartesian join:
select data
from Item
where item_ID in (
select item_ID
from Item_Version
where item_ID in (
select item_ID
from Section_to_Item
where sec_ID in (
select sec_ID
from Section
where group_ID = 1
)
)
and exists (
select 1
from (
select 2 as version
from dual
union all
select 3 as version
from dual
) versions
where versions.version >= start_ID
and (end_ID is null or versions.version < end_ID)
)
)
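A possible refinement, since the UNION ALL block is rebuilt for every version list: if the calling code can pass the versions as a collection bind instead, Oracle's built-in sys.odcinumberlist type lets you expand it with TABLE(), keeping the SQL text (and thus the shared cursor) stable across calls. A sketch trimmed to the Item/Item_Version part; re-add the Section filters from the query above:
select data
from Item
where item_ID in (
    select iv.item_ID
    from Item_Version iv
    where exists (
        select 1
        from table(sys.odcinumberlist(2, 3)) v -- literals here; in practice a bind variable
        where iv.start_ID <= v.column_value
        and (iv.end_ID is null or iv.end_ID > v.column_value)
    )
)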

Select and count in the same query on two tables

I've got these two tables:
___Subscriptions
|--------|--------------------|--------------|
| SUB_Id | SUB_HotelId | SUB_PlanName |
|--------|--------------------|--------------|
| 1 | cus_AjGG401e9a840D | Free |
|--------|--------------------|--------------|
___Rooms
|--------|-------------------|
| ROO_Id | ROO_HotelId |
|--------|-------------------|
| 1 |cus_AjGG401e9a840D |
| 2 |cus_AjGG401e9a840D |
| 3 |cus_AjGG401e9a840D |
| 4 |cus_AjGG401e9a840D |
|--------|-------------------|
I'd like to select the SUB_PlanName and count the rooms with the same HotelId.
So I tried:
SELECT COUNT(*) as 'ROO_Count', SUB_PlanName
FROM ___Rooms
JOIN ___Subscriptions
ON ___Subscriptions.SUB_HotelId = ___Rooms.ROO_HotelId
WHERE ROO_HotelId = 'cus_AjGG401e9a840D'
and
SELECT
SUB_PlanName,
(
SELECT Count(ROO_Id)
FROM ___Rooms
Where ___Rooms.ROO_HotelId = ___Subscriptions.SUB_HotelId
) as ROO_Count
FROM ___Subscriptions
WHERE SUB_HotelId = 'cus_AjGG401e9a840D'
But I get an empty result.
Could you please help?
Thanks.
You need to use GROUP BY whenever you do some aggregation (here COUNT()). The query below will give you the number of ROO_Ids only for SUB_HotelId = 'cus_AjGG401e9a840D', because you have that condition in the WHERE clause. If you want the counts for all hotel IDs, you can simply remove the WHERE filter from this query.
SELECT s.SUB_PlanName, COUNT(*) as 'ROO_Count'
FROM ___Rooms r
JOIN ___Subscriptions s
ON s.SUB_HotelId = r.ROO_HotelId
WHERE r.ROO_HotelId = 'cus_AjGG401e9a840D'
GROUP BY s.SUB_PlanName;
To be safe, you can also use COUNT(DISTINCT r.ROO_Id) if you don't want to double count a repeating ROO_Id. But your table structure seems to have unique (non-repeating) ROO_Ids, so COUNT(*) should work as well.
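One caveat worth noting: with the inner JOIN above, a subscription whose hotel has no rooms yet drops out of the result entirely rather than showing a zero. If you want the plan name with a 0 count in that case, flip it to a LEFT JOIN from subscriptions to rooms and count the nullable room column; a sketch against the same tables:
SELECT s.SUB_PlanName,
       COUNT(r.ROO_Id) AS ROO_Count -- COUNT(column) ignores the NULLs a LEFT JOIN produces
FROM ___Subscriptions s
LEFT JOIN ___Rooms r ON r.ROO_HotelId = s.SUB_HotelId
WHERE s.SUB_HotelId = 'cus_AjGG401e9a840D'
GROUP BY s.SUB_PlanName;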

SQL union / join / intersect multiple select statements

I have two select statements. One gets a list (if any) of logged voltage data in the past 60 seconds and related chamber names, and one gets a list (if any) of logged arc event data in the past 5 minutes. I am trying to append the arc count data as new columns to the voltage data table. I cannot figure out how to do this.
Note that there may or may not be arc count rows for a given chamber name that appears in the voltage data table. If there are no rows, I want to set the arc count column value to zero.
Any ideas on how to accomplish this?
Voltage Data:
SELECT DISTINCT
    dbo.CoatingChambers.Name,
    AVG(dbo.CoatingGridVoltage_Data.ChanA_DCVolts) AS ChanADC,
    AVG(dbo.CoatingGridVoltage_Data.ChanB_DCVolts) AS ChanBDC,
    AVG(dbo.CoatingGridVoltage_Data.ChanA_RFVolts) AS ChanARF,
    AVG(dbo.CoatingGridVoltage_Data.ChanB_RFVolts) AS ChanBRF
FROM dbo.CoatingGridVoltage_Data
LEFT OUTER JOIN dbo.CoatingChambers
    ON dbo.CoatingGridVoltage_Data.CoatingChambersID = dbo.CoatingChambers.CoatingChambersID
WHERE dbo.CoatingGridVoltage_Data.DT > DATEADD(second, -60, SYSUTCDATETIME())
GROUP BY dbo.CoatingChambers.Name
Returns
Name | ChanADC | ChanBDC | ChanARF | ChanBRF
-----+-------------------+--------------------+---------------------+------------------
OX2 | 2.9099999666214 | -0.485000004371007 | 0.344801843166351 | 0.49748428662618
S2 | 0.100000001490116 | -0.800000016887983 | 0.00690172302226226 | 0.700591623783112
S3 | 4.25666658083598 | 0.5 | 0.96554297208786 | 0.134956782062848
Arc count table:
SELECT CoatingChambers.Name,
SUM(ArcCount) as ArcCount
FROM CoatingChambers
LEFT JOIN CoatingArc_Data
ON dbo.[CoatingArc_Data].CoatingChambersID = dbo.CoatingChambers.CoatingChambersID
where EventDT > DATEADD(mi,-5, GETDATE())
Group by Name
Returns
Name | ArcCount
-----+---------
L1 | 283
L4 | 0
L6 | 1
S2 | 55
To be clear, I want this table (with added arc count column), given the two tables above:
Name | ChanADC | ChanBDC | ChanARF | ChanBRF | ArcCount
-----+-------------------+--------------------+---------------------+-------------------+---------
OX2 | 2.9099999666214 | -0.485000004371007 | 0.344801843166351 | 0.49748428662618 | 0
S2 | 0.100000001490116 | -0.800000016887983 | 0.00690172302226226 | 0.700591623783112 | 55
S3 | 4.25666658083598 | 0.5 | 0.96554297208786 | 0.134956782062848 | 0
You can treat the select statements as virtual tables and just join them together:
select
x.Name,
x.ChanADC,
x.ChanBDC,
x.ChanARF,
x.ChanBRF,
isnull( y.ArcCount, 0 ) ArcCount
from
(
select distinct
cc.Name,
AVG(cgv.ChanA_DCVolts) AS ChanADC,
AVG(cgv.ChanB_DCVolts) AS ChanBDC,
AVG(cgv.ChanA_RFVolts) AS ChanARF,
AVG(cgv.ChanB_RFVolts) AS ChanBRF
from
dbo.CoatingGridVoltage_Data cgv
left outer join
dbo.CoatingChambers cc
on
cgv.CoatingChambersID = cc.CoatingChambersID
where
cgv.DT > dateadd(second, - 60, sysutcdatetime())
group by
cc.Name
) as x
left outer join
(
select
cc.Name,
sum(ac.ArcCount) as ArcCount
from
dbo.CoatingChambers cc
left outer join
dbo.CoatingArc_Data ac
on
ac.CoatingChambersID = cc.CoatingChambersID
where
EventDT > dateadd(mi,-5, getdate())
group by
Name
) as y
on
x.Name = y.Name
Also, it's worthwhile to simplify your names with aliases and format the queries for readability...which I shamelessly took a stab at.
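The same statement can also be phrased with CTEs instead of nested derived tables; this is purely a readability restatement of the query above (the redundant DISTINCT is dropped since GROUP BY already deduplicates), not a behavioral change:
with volts as
(
    select
        cc.Name,
        avg(cgv.ChanA_DCVolts) as ChanADC,
        avg(cgv.ChanB_DCVolts) as ChanBDC,
        avg(cgv.ChanA_RFVolts) as ChanARF,
        avg(cgv.ChanB_RFVolts) as ChanBRF
    from dbo.CoatingGridVoltage_Data cgv
    left outer join dbo.CoatingChambers cc
        on cgv.CoatingChambersID = cc.CoatingChambersID
    where cgv.DT > dateadd(second, -60, sysutcdatetime())
    group by cc.Name
),
arcs as
(
    select
        cc.Name,
        sum(ac.ArcCount) as ArcCount
    from dbo.CoatingChambers cc
    left outer join dbo.CoatingArc_Data ac
        on ac.CoatingChambersID = cc.CoatingChambersID
    where ac.EventDT > dateadd(mi, -5, getdate())
    group by cc.Name
)
select
    v.Name,
    v.ChanADC,
    v.ChanBDC,
    v.ChanARF,
    v.ChanBRF,
    isnull(a.ArcCount, 0) as ArcCount
from volts v
left outer join arcs a
    on a.Name = v.Name;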

CTE to represent a logical table for the rows in a table which have the max value in one column

I have an "insert only" database, wherein records aren't physically updated but rather logically updated by adding a new record with a CRUD value and a larger sequence. In this case, the "seq" (sequence) column is more in line with what you may consider a primary key, but "id" is the logical identifier for the record.
This is the physical representation of the table:
 seq | id | name  | CRUD |
-----|----|-------|------|
   1 | 10 | john  | C    |
   2 | 10 | joe   | U    |
   3 | 11 | kent  | C    |
   4 | 12 | katie | C    |
   5 | 12 | sue   | U    |
   6 | 13 | jill  | C    |
   7 | 14 | bill  | C    |
This is the logical representation of the table, considering the "most recent" records:
 seq | id | name  | CRUD |
-----|----|-------|------|
   2 | 10 | joe   | U    |
   3 | 11 | kent  | C    |
   5 | 12 | sue   | U    |
   6 | 13 | jill  | C    |
   7 | 14 | bill  | C    |
In order to, for instance, retrieve the most recent record for the person with id=12, I would currently do something like this:
SELECT
    *
FROM
    PEOPLE P
WHERE
    P.ID = 12
AND
    P.SEQ = (
        SELECT
            MAX(P1.SEQ)
        FROM
            PEOPLE P1
        WHERE
            P1.ID = P.ID
    )
...and I would receive this row:
 seq | id | name | CRUD |
-----|----|------|------|
   5 | 12 | sue  | U    |
What I'd rather do is something like this:
WITH
NEW_P
AS
(
--CTE representing all of the most recent records
--i.e. for any given id, the most recent sequence
)
SELECT
*
FROM
NEW_P P2
WHERE
P2.ID = 12
The first SQL example using the subquery already works for us.
Question: How can I leverage a CTE to simplify our predicates when needing to leverage the "most recent" logical view of the table. In essence, I don't want to inline a subquery every single time I want to get at the most recent record. I'd rather define a CTE and leverage that in any subsequent predicate.
P.S. While I'm currently using DB2, I'm looking for a solution that is database agnostic.
This is a clear case for window (or OLAP) functions, which are supported by all modern SQL databases. For example:
WITH
ORD_P
AS
(
SELECT p.*, ROW_NUMBER() OVER ( PARTITION BY id ORDER BY seq DESC) rn
FROM people p
)
,
NEW_P
AS
(
SELECT * from ORD_P
WHERE rn = 1
)
SELECT
*
FROM
NEW_P P2
WHERE
P2.ID = 12
PS. Not tested. You may need to explicitly list all columns in the CTE clauses.
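If the underlying goal is to never restate this logic inline, and you're allowed to create schema objects, the same window-function filter can be baked into a view (latest_people is a made-up name) so every later query can treat it as the logical table:
CREATE VIEW latest_people AS
SELECT seq, id, name, crud
FROM (
    SELECT p.*,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY seq DESC) AS rn
    FROM people p
) t
WHERE rn = 1;

-- afterwards any predicate runs against the logical view:
SELECT * FROM latest_people WHERE id = 12;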
I guess you already put it together. First find the max seq associated with each id, then use that to join back to the main table:
WITH newp AS (
SELECT id, MAX(seq) AS latestseq
FROM people
GROUP BY id
)
SELECT p.*
FROM people p
JOIN newp n ON (n.latestseq = p.seq)
ORDER BY p.id
What you originally had would work too, as would moving the CTE into the FROM clause. Maybe you want to use a timestamp field rather than a sequence number for the ordering?
Following up from #Glenn's answer, here is an updated query which meets my original goal and is on par with #mustaccio's answer, but I'm still not sure what the performance (and other) implications of this approach vs the other are.
WITH
LATEST_PERSON_SEQS AS
(
SELECT
ID,
MAX(SEQ) AS LATEST_SEQ
FROM
PERSON
GROUP BY
ID
)
,
LATEST_PERSON AS
(
SELECT
P.*
FROM
PERSON P
JOIN
LATEST_PERSON_SEQS L
ON
(
L.LATEST_SEQ = P.SEQ)
)
SELECT
*
FROM
LATEST_PERSON L2
WHERE
L2.ID = 12