INSERT SELECT loop - sql

I am trying to transfer data from one table to another, but in the process I need to do something extra. I am wondering: is it possible to do something like this in SQL or PL/SQL alone?
source:
-------------------
| id | name | qty |
-------------------
|  1 | test |  2  |
|  2 | ago  |  1  |
-------------------

target:
------------------------
| id | source_id | qty |
------------------------
|  1 |     1     |  1  |
|  2 |     1     |  1  |
|  3 |     2     |  1  |
------------------------
Here, based on the quantity in the source table, I will have to insert multiple records; the quantity could be any number. The ID in the target table is auto incremented. I tried this
INSERT INTO target (SELECT id, qty FROM source);
But this does not take care of the qty loop.

Plain SQL:
with
  inputs ( id, qty ) as (
    select 1, 2 from dual union all
    select 2, 1 from dual union all
    select 3, 5 from dual
  )
-- end of test data; solution (SQL query) begins below this line
select row_number() over (order by id) as id, id as source_id, 1 as qty
from   inputs
connect by level <= qty
       and prior id = id
       and prior sys_guid() is not null
;
NOTE - if the id is generated automatically, just drop the row_number().... as id column; the rest is unchanged.
ID SOURCE_ID QTY
-- --------- ---
 1         1   1
 2         1   1
 3         2   1
 4         3   1
 5         3   1
 6         3   1
 7         3   1
 8         3   1
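The CONNECT BY trick above is Oracle-specific. As an illustration only, the same row multiplication can be sketched with a standard recursive CTE (run here against SQLite through Python; the table and column names follow the question, the third row is extra test data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source (id INTEGER, name TEXT, qty INTEGER)")
conn.executemany("INSERT INTO source VALUES (?, ?, ?)",
                 [(1, "test", 2), (2, "ago", 1), (3, "xyz", 5)])

# Recursive CTE: each row re-emits itself with n + 1 until n reaches qty,
# producing exactly qty copies of every source id.
rows = conn.execute("""
    WITH RECURSIVE expanded(id, n, qty) AS (
        SELECT id, 1, qty FROM source
        UNION ALL
        SELECT id, n + 1, qty FROM expanded WHERE n < qty
    )
    SELECT id AS source_id, 1 AS qty
    FROM expanded
    ORDER BY id
""").fetchall()
```

A source row with qty = 2 appears twice in the result, qty = 5 five times, and so on; an INSERT ... SELECT over this CTE fills the target table.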

This is possible using SQL. Use a CTE to generate as many rows as the maximum qty in your source table, and use a non-equi JOIN to multiply the rows. Use the row_number analytic function to assign each row its unique id (if you have it in your target table; check my Edit below):
with gen_numbers(r) as (
  select rownum r
  from dual
  connect by rownum <= (select max(qty) from src) -- our maximum limit of rows needed
)
select
  row_number() over (order by src.id) as id,
  src.id as source_id,
  1 as qty
from src
join gen_numbers on gen_numbers.r <= src.qty; -- clone each row qty times
Note that you can safely put the constant value 1 in the qty output column.
Your test data:
create table src (id int, name varchar(255), qty int);
insert into src (id, name, qty)
select 1, 'test', 2 from dual union all
select 2, 'ago', 1 from dual
;
Result:
ID SOURCE_ID QTY
 1         1   1
 2         1   1
 3         2   1
Edit: Since your target id column is auto incremented, you don't need the row_number. Just specify it like that to perform an INSERT:
insert into target_table(source_id, qty)
with gen_numbers(r) as (
  select rownum r
  from dual
  connect by rownum <= (select max(qty) from src) -- our maximum limit of rows needed
)
select
  src.id as source_id,
  1 as qty
from src
join gen_numbers on gen_numbers.r <= src.qty -- clone each row qty times
order by src.id;
Notice that I've added an ORDER BY clause to ensure a predictable insert order.

INSERT INTO TARGET(source_id, qty)
WITH
output (id, qty)
AS
(
    -- recursive: each row re-emits itself with qty reduced by 1,
    -- so a source row with qty = n yields n rows in total
    SELECT id, qty FROM source
    UNION ALL
    SELECT id, qty - 1 FROM output WHERE qty > 1
)
SELECT
    id, 1 as qty
FROM output
ORDER BY
    id


How to insert rows without duplicate values into two different tables (header and detail)

I have the following scenario: I need to insert a header and details, which live in two different tables.
The order field should be inserted uniquely (no duplicate values), and the detail rows get the id from the header.
the data received by csv is:
order, number, qty, price
-------------------------
1000,a1000,1,2.0
1000,a1001,2,3.0
1001,a1000,1,3.0
1001,a1001,1,3.0
1001,a1000,1,3.0
I have the following query in PostgreSQL. It does not work; it is duplicating the records. How can I solve this problem?
INSERT INTO public.HeaderTable ( order )
SELECT order
FROM public.HeaderTable
WHERE NOT EXISTS (
    SELECT idpo
    FROM public.HeaderTable
    WHERE order = '1000'
)
LIMIT 1;
For this second query, I don't know how to insert the detail: get the id from the header and insert the row only if it does not already exist, otherwise skip it...
INSERT INTO public.DetailsTable ( idh, product, qty )
SELECT order
FROM public.HeaderTable
WHERE NOT EXISTS (
    SELECT idpo
    FROM public.HeaderTable
    WHERE order = '1000'
) LIMIT 1;
expected result:
note: this is what the inserts are expected to produce
HeaderTable:
id | order
------------
1 | 1000
2 | 1001
DetailsTable:
id | idh | product | qty
----------------------------
1 | 1 | a1000 | 2.0
2 | 1 | a1001 | 3.0
3 | 2 | a1000 | 3.0
4 | 2 | a1001 | 3.0

Sequence generation for new ID

I have a requirement involving two tables, A and B. A column named ID is the primary key of A and a foreign key in B, and the tables have a one-to-many relationship. For the ID column in table A we have entries 1, 2, 3, and each ID has multiple corresponding rows in table B, which has a column named SEQ populated from a sequence. For ID 1 I currently have 3 entries in table B with SEQ 1, 2, 3; but when a new ID entry arrives, I need the sequence to start again from 1 for that ID. Can you please help me do that?
I'd suggest you not store the SEQ value. Why would you? It is easy to calculate whenever needed. How? Like this, using the row_number analytic function:
SQL> with b (id, name) as
2 (select 1, 'TRI' from dual union all
3 select 1, 'TRI' from dual union all
4 select 1, 'TRI' from dual union all
5 select 2, 'ROHIT' from dual union all
6 select 2, 'ROHIT' from dual union all
7 select 3, 'RAVI' from dual
8 )
9 select id,
10 name,
11 row_number() over (partition by id order by null) seq
12 from b;
ID NAME SEQ
---------- ----- ----------
1 TRI 1
1 TRI 2
1 TRI 3
2 ROHIT 1
2 ROHIT 2
3 RAVI 1
6 rows selected.
SQL>
If you still want to store it, now you know how.
Don't create multiple sequences. Use a single sequence for the B table and accept that there will be gaps within each ID. If you need gapless values, calculate them on demand with the ROW_NUMBER() analytic function and, if you like, wrap that in a view. Also, don't duplicate the name from table A in table B; keep your data normalised.
CREATE TABLE A (
id NUMBER(8,0)
GENERATED ALWAYS AS IDENTITY
CONSTRAINT A__id__pk PRIMARY KEY,
name VARCHAR2(20)
);
CREATE TABLE B (
id NUMBER(8,0)
CONSTRAINT B__id__nn NOT NULL
CONSTRAINT B__id__fk REFERENCES A(id),
seq NUMBER(8,0)
GENERATED ALWAYS AS IDENTITY
CONSTRAINT B__seq__pk PRIMARY KEY
);
Then you can create your sample data:
INSERT INTO A ( name )
SELECT 'TRI' FROM DUAL UNION ALL
SELECT 'ROHIT' FROM DUAL UNION ALL
SELECT 'RAVI' FROM DUAL;
INSERT INTO B ( id )
SELECT 1 FROM DUAL UNION ALL
SELECT 1 FROM DUAL UNION ALL
SELECT 2 FROM DUAL UNION ALL
SELECT 3 FROM DUAL UNION ALL
SELECT 2 FROM DUAL UNION ALL
SELECT 1 FROM DUAL;
And:
SELECT *
FROM B
Outputs:
ID | SEQ
-: | --:
1 | 1
1 | 2
2 | 3
3 | 4
2 | 5
1 | 6
If you want your output then create a view:
CREATE VIEW B_view ( id, name, seq ) AS
SELECT b.id,
a.name,
ROW_NUMBER() OVER ( PARTITION BY b.id ORDER BY seq )
FROM B
INNER JOIN A
ON ( B.id = A.id )
Then:
SELECT *
FROM b_view
Outputs:
ID | NAME | SEQ
-: | :---- | --:
1 | TRI | 1
1 | TRI | 2
1 | TRI | 3
2 | ROHIT | 1
2 | ROHIT | 2
3 | RAVI | 1
db<>fiddle here
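The derive-on-demand idea is portable beyond Oracle. A minimal sketch of the same per-ID ROW_NUMBER calculation, using SQLite (3.25+ for window functions) through Python, with simplified lowercase table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);
    CREATE TABLE b (seq INTEGER PRIMARY KEY AUTOINCREMENT,
                    id INTEGER REFERENCES a(id));
    INSERT INTO a (name) VALUES ('TRI'), ('ROHIT'), ('RAVI');
    INSERT INTO b (id) VALUES (1), (1), (2), (3), (2), (1);
""")

# The per-id sequence is derived on demand, never stored:
# ROW_NUMBER restarts at 1 inside each PARTITION BY b.id group,
# even though b.seq itself has gaps per id (1,2,6 for id 1, etc.).
rows = conn.execute("""
    SELECT b.id, a.name,
           ROW_NUMBER() OVER (PARTITION BY b.id ORDER BY b.seq) AS grp_seq
    FROM b JOIN a ON a.id = b.id
    ORDER BY b.id, b.seq
""").fetchall()
```

This reproduces the gapless 1..n numbering per ID shown above without storing it, so it can never drift out of date.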

SQL: How to filter a table to IDs having more than one unique value of another column

I have data table Customers that looks like this:
ID | Sequence No |
1 | 1 |
1 | 2 |
1 | 3 |
2 | 1 |
2 | 1 |
2 | 1 |
3 | 1 |
3 | 2 |
I would like to filter the table so that only IDs with more than 1 distinct count of Sequence No remain.
Expected output:
ID | Sequence No |
1 | 1 |
1 | 2 |
1 | 3 |
3 | 1 |
3 | 2 |
I tried
select ID, Sequence No
from Customers
where count(distinct Sequence No) > 1
order by ID
but I'm getting an error. How can I solve this?
You can get the desired result by using the below query. This is similar to what you were trying -
Sample Table & Data
Create table #Data
(Id int, [Sequence No] int)
Insert into #Data
values
(1 , 1 ),
(1 , 2 ),
(1 , 3 ),
(2 , 1 ),
(2 , 1 ),
(2 , 1 ),
(3 , 1 ),
(3 , 2 )
Query
Select * from #Data
where ID in(
select ID
from #Data
Group by ID
Having count(distinct [Sequence No]) > 1
)
Using analytic functions, we can try:
WITH cte AS (
SELECT *, MIN([Sequence No]) OVER (PARTITION BY ID) min_seq,
MAX([Sequence No]) OVER (PARTITION BY ID) max_seq
FROM Customers
)
SELECT ID, [Sequence No]
FROM cte
WHERE min_seq <> max_seq
ORDER BY ID, [Sequence No];
Demo
We are checking for a distinct count of sequence number by asserting that the minimum and maximum sequence numbers are not the same for a given ID. The above query could benefit from the following index:
CREATE INDEX idx ON Customers (ID, [Sequence No]);
This would let the min and max values be looked up faster.
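A minimal, portable sketch of the grouped-subquery approach (SQLite via Python; the table and column names here are illustrative stand-ins for Customers and [Sequence No]):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, seq_no INTEGER)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, 1), (1, 2), (1, 3), (2, 1), (2, 1), (2, 1),
                  (3, 1), (3, 2)])

# Keep only the ids that have more than one distinct seq_no;
# id 2 has three rows but a single distinct value, so it is dropped.
rows = conn.execute("""
    SELECT id, seq_no
    FROM customers
    WHERE id IN (SELECT id
                 FROM customers
                 GROUP BY id
                 HAVING COUNT(DISTINCT seq_no) > 1)
    ORDER BY id, seq_no
""").fetchall()
```

Note the filter goes in a subquery with GROUP BY / HAVING: aggregates such as COUNT(DISTINCT ...) are not allowed directly in a WHERE clause, which is why the original attempt errored.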

Count Values associated with key in Sql Server

I have three tables
Table Category
CategoryID Category
1 Climate
2 Area
Table CategoryDetail
DetailID CategoryID Desc
1 1 Hot
2 1 Cold
3 2 Area1
Table CategoryDetailValues
PK AnotherFK CategoryDetailID
1 1 1
2 1 1
3 1 2
4 2 1
Here AnotherFK is a foreign key referring to another table. In records 1 and 2 a duplicate exists; that's OK. But AnotherFK 1 also references CategoryDetailID 1 and 2, which both belong to CategoryID 1, and that is not OK.
So from above tables
this result is valid from above three table
PK AnotherFK CategoryID DetailID Desc
1 1 1 1 Hot
2 1 1 1 Hot
But below result is not valid
PK AnotherFK CategoryID DetailID Desc
2 1 1 1 Hot
3 1 1 2 Cold
I cannot put the same AnotherFK in two different DetailIDs that share the same CategoryID. I could have prevented this by introducing CategoryID in the CategoryDetailValues table and creating a unique constraint, but I am not allowed to do so.
Now my aim is to find all records in the CategoryDetailValues table that have different DetailIDs associated with the same CategoryID, so that I can delete them.
Trying to achieve this in SQL Server 2012.
If your goal is to highlight all AnotherFK cases that have the same CategoryID but different DetailIDs, the following ought to do the trick (pseudo-code):
SELECT * FROM (SELECT AnotherFK, ROW_NUMBER() OVER
    (PARTITION BY AnotherFK, CategoryID ORDER BY DetailID) AS rn FROM #myTable) AS a
WHERE rn > 1
Sample code:
CREATE TABLE #myTable
(
AnotherFK int
, CategoryID int
, DetailID int
) ;
INSERT INTO #myTable (
AnotherFK
, CategoryID
, DetailID
)
VALUES (1, 1, 1)
, (1, 1, 2);
SELECT * FROM (SELECT AnotherFK, ROW_NUMBER() OVER (PARTITION BY AnotherFK, CategoryID ORDER BY DetailID) AS rn FROM #myTable) AS a
WHERE rn > 1
DROP TABLE #myTable
If this is not what you are after, please elaborate
I think you could use something like this:
Script to create sample tables:
CREATE TABLE mytable(
PK INTEGER NOT NULL PRIMARY KEY
,AnotherFK INTEGER NOT NULL
,CategoryDetailID INTEGER NOT NULL
);
INSERT INTO mytable(PK,AnotherFK,CategoryDetailID) VALUES (1,1,1);
INSERT INTO mytable(PK,AnotherFK,CategoryDetailID) VALUES (2,1,1);
INSERT INTO mytable(PK,AnotherFK,CategoryDetailID) VALUES (3,1,2);
INSERT INTO mytable(PK,AnotherFK,CategoryDetailID) VALUES (4,2,1);
INSERT INTO mytable(PK,AnotherFK,CategoryDetailID) VALUES (5,1,3);
INSERT INTO mytable(PK,AnotherFK,CategoryDetailID) VALUES (6,1,3);
INSERT INTO mytable(PK,AnotherFK,CategoryDetailID) VALUES (7,1,3);
CREATE TABLE mytable2(
DetailID INTEGER NOT NULL
,CategoryID INTEGER NOT NULL
,Descr VARCHAR(5) NOT NULL
);
Query to show the "suspect" records (I think you have to decide which records to delete...):
SELECT * FROM (
SELECT * ,COUNT(*) OVER (PARTITION BY CategoryID, ANotherFK) AS X
, COUNT(*) OVER (PARTITION BY CategoryID, DetailID, ANotherFK) AS X1
FROM mytable A
INNER JOIN mytable2 B ON A.CategoryDetailID= B.DetailID
)C
WHERE X-X1 >0
Output:
+--+----+-----------+------------------+----------+------------+-------+---+----+
| | PK | AnotherFK | CategoryDetailID | DetailID | CategoryID | Descr | X | X1 |
+--+----+-----------+------------------+----------+------------+-------+---+----+
| | 1 | 1 | 1 | 1 | 1 | Hot | 3 | 2 |
| | 2 | 1 | 1 | 1 | 1 | Hot | 3 | 2 |
| | 3 | 1 | 2 | 2 | 1 | Cold | 3 | 1 |
+--+----+-----------+------------------+----------+------------+-------+---+----+
This query looks in CategoryDetail for CategoryIDs associated with more than one DetailID, then joins the tables to provide the details. It's still up to you to decide which records should be deleted.
select *
from (
    select CategoryID
    from CategoryDetail
    group by CategoryID
    having count(DetailID) > 1
) aggr
join CategoryDetail c on aggr.CategoryID = c.CategoryID
join CategoryDetailValues v on c.DetailID = v.CategoryDetailID
You want one value per AnotherFK and CategoryID, so the third table should have a composite key:
CategoryDetailValues(AnotherFK, CategoryID, DetailID, HowMany)
with a unique constraint on (AnotherFK, CategoryID), and with (CategoryID, DetailID) together forming a foreign key to CategoryDetail(CategoryID, DetailID).
In order to clean up data first, you'd have to look for ambiguities:
select AnotherFK, CategoryID, DetailID
from
(
    select
        cdv.AnotherFK, cd.CategoryID, cd.DetailID,
        -- SQL Server does not allow COUNT(DISTINCT ...) as a window function,
        -- so compare the per-group MIN and MAX DetailID instead
        min(cd.DetailID) over (partition by cdv.AnotherFK, cd.CategoryID) as min_id,
        max(cd.DetailID) over (partition by cdv.AnotherFK, cd.CategoryID) as max_id
    from CategoryDetailValues cdv
    join CategoryDetail cd on cd.DetailID = cdv.CategoryDetailID
) t
where min_id <> max_id
order by AnotherFK, CategoryID, DetailID
You could try this solution, which comes with the following assumption: within the CDV table, for every [AnotherFK] value (e.g. 1), only the rows with the minimum [CategoryDetailID] (e.g. 1) should be displayed.
SELECT *
FROM (
    SELECT cdv.PK, cdv.AnotherFK, cd.CategoryID, cd.[Desc],
        Rnk = DENSE_RANK() OVER(PARTITION BY cdv.AnotherFK ORDER BY cdv.CategoryDetailID)
    FROM dbo.CategoryDetailValues cdv
    JOIN dbo.CategoryDetail cd ON cd.DetailID = cdv.CategoryDetailID
    WHERE cdv.AnotherFK = 1
) x
WHERE x.Rnk = 1
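To summarise the detection logic shared by the answers above: an (AnotherFK, CategoryID) pair is inconsistent when it maps to more than one distinct DetailID. A minimal sketch using a plain GROUP BY / HAVING (SQLite via Python; the data follows the question, the Descr column name is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CategoryDetail (DetailID INTEGER, CategoryID INTEGER,
                                 Descr TEXT);
    INSERT INTO CategoryDetail VALUES (1, 1, 'Hot'), (2, 1, 'Cold'),
                                      (3, 2, 'Area1');
    CREATE TABLE CategoryDetailValues (PK INTEGER, AnotherFK INTEGER,
                                       CategoryDetailID INTEGER);
    INSERT INTO CategoryDetailValues VALUES (1, 1, 1), (2, 1, 1),
                                            (3, 1, 2), (4, 2, 1);
""")

# An AnotherFK is inconsistent when it points at two or more distinct
# DetailIDs belonging to the same CategoryID (here AnotherFK 1 hits
# DetailIDs 1 and 2, both under CategoryID 1).
bad = conn.execute("""
    SELECT v.AnotherFK, d.CategoryID
    FROM CategoryDetailValues v
    JOIN CategoryDetail d ON d.DetailID = v.CategoryDetailID
    GROUP BY v.AnotherFK, d.CategoryID
    HAVING COUNT(DISTINCT d.DetailID) > 1
    ORDER BY v.AnotherFK, d.CategoryID
""").fetchall()
```

Joining this pair list back to CategoryDetailValues then yields the candidate rows to delete.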

SQL query update by grouping

I'm dealing with some legacy data in an Oracle table and have the following
--------------------------------------------
| RefNo | ID |
--------------------------------------------
| FOO/BAR/BAZ/AAAAAAAAAA | 1 |
| FOO/BAR/BAZ/BBBBBBBBBB | 1 |
| FOO/BAR/BAZ/CCCCCCCCCC | 1 |
| FOO/BAR/BAZ/DDDDDDDDDD | 1 |
--------------------------------------------
For each of the /FOO/BAR/BAZ/% records I want to make the ID a Unique incrementing number.
Is there a method to do this in SQL?
Thanks in advance
EDIT
Sorry for not being specific. I have several groups of records: /FOO/BAR/BAZ/, /FOO/ZZZ/YYY/, and so on. The same transformation needs to occur for each of these groups. The recnum can't be used; I want ID to start from 1 and increment within each group of records I have to change.
Sorry for making a mess of my first post. Output should be
--------------------------------------------
| RefNo | ID |
--------------------------------------------
| FOO/BAR/BAZ/AAAAAAAAAA | 1 |
| FOO/BAR/BAZ/BBBBBBBBBB | 2 |
| FOO/BAR/BAZ/CCCCCCCCCC | 3 |
| FOO/BAR/BAZ/DDDDDDDDDD | 4 |
| FOO/ZZZ/YYY/AAAAAAAAAA | 1 |
| FOO/ZZZ/YYY/BBBBBBBBBB | 2 |
--------------------------------------------
Let's try something like this (Oracle version 10g and higher):
SQL> with t1 as(
2 select 'FOO/BAR/BAZ/AAAAAAAAAA' as RefNo, 1 as ID from dual union all
3 select 'FOO/BAR/BAZ/BBBBBBBBBB', 1 from dual union all
4 select 'FOO/BAR/BAZ/CCCCCCCCCC', 1 from dual union all
5 select 'FOO/BAR/BAZ/DDDDDDDDDD', 1 from dual union all
6 select 'FOO/ZZZ/YYY/AAAAAAAAAA', 1 from dual union all
7 select 'FOO/ZZZ/YYY/BBBBBBBBBB', 1 from dual union all
8 select 'FOO/ZZZ/YYY/CCCCCCCCCC', 1 from dual union all
9 select 'FOO/ZZZ/YYY/DDDDDDDDDD', 1 from dual
10 )
11 select row_number() over(partition by ComPart order by DifPart) as id
12 , RefNo
13 From (select regexp_substr(RefNo, '[[:alpha:]]+$') as DifPart
14 , regexp_substr(RefNo, '([[:alpha:]]+/)+') as ComPart
15 , RefNo
16 , Id
17 from t1
18 ) q
19 ;
ID REFNO
---------- -----------------------
1 FOO/BAR/BAZ/AAAAAAAAAA
2 FOO/BAR/BAZ/BBBBBBBBBB
3 FOO/BAR/BAZ/CCCCCCCCCC
4 FOO/BAR/BAZ/DDDDDDDDDD
1 FOO/ZZZ/YYY/AAAAAAAAAA
2 FOO/ZZZ/YYY/BBBBBBBBBB
3 FOO/ZZZ/YYY/CCCCCCCCCC
4 FOO/ZZZ/YYY/DDDDDDDDDD
I think that actually updating the ID column wouldn't be a good idea: every time you add new groups of data, you would have to run the update statement again. A better way would be to create a view; you will see the desired output every time you query it.
rownum can be used as an incrementing ID?
UPDATE legacy_table
SET id = ROWNUM;
This will assign unique values to all records in the table. This link contains documentation about Oracle Pseudocolumn.
You can run the following:
update <table_name> set id = rownum where descr like 'FOO/BAR/BAZ/%'
This is pretty rough, and I'm not sure if your RefNo is a single column or you just wrote it like that for simplicity.
select
    sub.RefNo,
    row_number() over (order by sub.RefNo) + (select max(id) from TABLE) as id
from (
    select FOO+'/'+BAR+'/'+BAZ+'/'+OTHER as RefNo
    from TABLE
    group by FOO+'/'+BAR+'/'+BAZ+'/'+OTHER
) sub
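For completeness, the per-group renumbering can also be applied as an actual UPDATE using a correlated count instead of ROW_NUMBER, which helps on engines where window functions are not allowed in an UPDATE. A minimal sketch (SQLite via Python; the rtrim/replace trick for extracting the prefix up to the last '/' is SQLite-specific, and the table name is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE legacy (RefNo TEXT, ID INTEGER)")
conn.executemany("INSERT INTO legacy VALUES (?, 1)", [
    ("FOO/BAR/BAZ/AAAAAAAAAA",), ("FOO/BAR/BAZ/BBBBBBBBBB",),
    ("FOO/ZZZ/YYY/AAAAAAAAAA",), ("FOO/ZZZ/YYY/BBBBBBBBBB",),
])

# rtrim(RefNo, replace(RefNo, '/', '')) strips trailing non-'/' characters,
# leaving the group prefix up to and including the last '/'.
# The new ID is the count of rows in the same group with RefNo <= this one,
# i.e. the row's 1-based rank within its group.
conn.execute("""
    UPDATE legacy
    SET ID = (SELECT COUNT(*) FROM legacy l2
              WHERE rtrim(l2.RefNo, replace(l2.RefNo, '/', '')) =
                    rtrim(legacy.RefNo, replace(legacy.RefNo, '/', ''))
                AND l2.RefNo <= legacy.RefNo)
""")

result = conn.execute("SELECT RefNo, ID FROM legacy ORDER BY RefNo").fetchall()
```

Each /FOO/BAR/BAZ/ and /FOO/ZZZ/YYY/ group ends up numbered 1, 2, ... independently, matching the expected output in the question.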