SQL: multiple inserts into one table while looping/iterating over another table?

I have two tables, "TempStore" and "Store", each with a column called "Items".
There is data in the "TempStore" table which I need to move over to the "Store" table, with a few modifications along the way.
I need to iterate over the "TempStore" data (i.e. the items) and insert rows into "Store"...
More specifically: how can I iterate over "TempStore" (in SQL) so that for each item in "TempStore" I store 2 or maybe 3 items in "Store", each with a small modification?
In other words, I want to take each row from "SELECT * FROM TempStore" and insert three records into "Store", while being able to change "Items".

try INSERT-SELECT:
INSERT INTO Store
(col1, col2, col3...)
SELECT
col1, col2, col3...
FROM TempStore
WHERE ...
Just make the SELECT return one row for every row you want to insert, and produce the values in the columns that you need. You might need a CASE expression and a join to another table to generate the extra rows.
EDIT based on comments, OP wanted to see the numbers table in action
Let's say the TempStore table has {Items, Cost, Price, ActualCost, ActualPrice}, but in the Store table I need to store {Items, Cost, Price}. The ActualCost and ActualPrice from each TempStore row would need to be added as another row in Store... (I hope this makes sense)... Anyways, is the solution using "WHILE-BEGIN-END"?
CREATE TABLE Numbers (Number int NOT NULL PRIMARY KEY)
INSERT INTO Numbers VALUES (1)
INSERT INTO Numbers VALUES (2)
INSERT INTO Numbers VALUES (3)

INSERT INTO Store
    (Items, Cost, Price)
SELECT
    t.Items, t.Cost
    ,CASE
         WHEN n.Number = 1 THEN t.Price
         WHEN n.Number = 2 THEN t.ActualCost
         ELSE t.ActualPrice
     END
FROM TempStore t
INNER JOIN Numbers n ON n.Number <= 3
WHERE ...
you could even use a UNION:
INSERT INTO Store
(Items, Cost, Price)
SELECT
t.Items, t.Cost, t.Price
FROM TempStore t
UNION ALL
SELECT
t.Items, t.Cost, t.ActualCost
FROM TempStore t
UNION ALL
SELECT
t.Items, t.Cost, t.ActualPrice
FROM TempStore t
Either the Numbers table or the UNION will be WAY better than a loop!

OK, I think KM has proposed an excellent solution involving a "numbers table". However, VoodooChild has requested in a comment that I post example code for my suggestion of using WHILE-BEGIN-END around an INSERT-SELECT.
I have created two tables like VoodooChild's Store and TempStore.
Store has columns StoreID, StoreName, StoreState, StoreNumber.
TempStore has columns TempStoreID, TempStoreName.
I prepopulated TempStoreName with the values First, Second, Third and Fourth.
Now, my SQL will insert three records into the Store table for every record in the TempStore table that meets the condition in the WHERE clause. That condition is the length of the TempStoreName, obviously not a real-world example.
DECLARE @counter int
SET @counter = 0;
WHILE @counter < 3
BEGIN
    INSERT INTO Store (StoreName, StoreState, StoreNumber)
    SELECT TempStoreName, 'AZ', @counter FROM TempStore WHERE LEN(TempStoreName) = 5
    SET @counter = @counter + 1
END
The result of this when applied to an empty Store table is:
StoreID  StoreName  StoreState  StoreNumber
1        First      AZ          0
2        First      AZ          1
3        First      AZ          2
4        Third      AZ          0
5        Third      AZ          1
6        Third      AZ          2
So, this approach works. It appears to meet VoodooChild's needs. It may or may not be the very best choice, but there are other factors involved in the decision that we don't know, such as how many times this operation is going to be repeated.
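For comparison, a set-based sketch that should produce the same six rows without a loop (assuming the same Store and TempStore columns as above; the VALUES table constructor requires SQL Server 2008 or later):
INSERT INTO Store (StoreName, StoreState, StoreNumber)
SELECT TempStoreName, 'AZ', n.StoreNumber
FROM TempStore
CROSS JOIN (VALUES (0), (1), (2)) AS n(StoreNumber) -- stands in for the three loop iterations
WHERE LEN(TempStoreName) = 5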

INSERT INTO Store SELECT * FROM TempStore UNION ALL SELECT * FROM TempStore
The above statement will insert two rows into Store for each row in TempStore. You can replace the SELECT * with whatever modification you want to make to the items.
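For instance, a sketch using the column names from the question's comment (Items, Cost, Price, ActualCost, ActualPrice; adjust to your actual schema), storing the regular values as one row and the "actual" values as a second row:
INSERT INTO Store (Items, Cost, Price)
SELECT Items, Cost, Price FROM TempStore            -- first row: regular values
UNION ALL
SELECT Items, ActualCost, ActualPrice FROM TempStore -- second row: "actual" values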

Given your latest comment, this should give you what you need. You should probably have some way of differentiating the values in your Stores table once they get there. Perhaps an "actual" BIT column or something similar:
INSERT INTO Stores (item, cost, price, actual)
SELECT item, cost, price, 0
FROM TempStores
UNION ALL
SELECT item, actual_cost, actual_price, 1
FROM TempStores
If you needed to adjust the columns (for example, increase actual_price by 10%) then you could do this:
INSERT INTO Stores (item, cost, price, actual)
SELECT item, cost, price, 0
FROM TempStores
UNION ALL
SELECT item, actual_cost, 1.1 * actual_price, 1
FROM TempStores
WHERE actual_cost IS NOT NULL
I also added a WHERE clause to the second SELECT statement to show that you can filter the rows. That WHERE clause will only affect the second SELECT. So, you could also do this:
INSERT INTO Stores (item, cost, price, actual)
SELECT item, cost, price, 0
FROM TempStores
WHERE cost IS NOT NULL
UNION ALL
SELECT item, actual_cost, 1.1 * actual_price, 1
FROM TempStores
WHERE actual_cost IS NOT NULL

Related

postgresql unnest and pivot int array column

I have the below table:
create table test(id serial, key int,type text,words text[],numbers int[] );
insert into test(key,type,words) select 1,'Name',array['Table'];
insert into test(key,type,numbers) select 1,'product_id',array[2];
insert into test(key,type,numbers) select 1,'price',array[40];
insert into test(key,type,numbers) select 1,'Region',array[23,59];
insert into test(key,type,words) select 2,'Name',array['Table1'];
insert into test(key,type,numbers) select 2,'product_id',array[1];
insert into test(key,type,numbers) select 2,'price',array[34];
insert into test(key,type,numbers) select 2,'Region',array[23,59,61];
insert into test(key,type,words) select 3,'Name',array['Chair'];
insert into test(key,type,numbers) select 3,'product_id',array[5];
I was using the below query to pivot the table for users:
select key,
max(array_to_string(words,',')) filter(where type='Name') as "Name",
cast(max(array_to_string(numbers,',')) filter(where type='product_id') as int) as "product_id",
cast(max(array_to_string(numbers,',')) filter(where type='price') as int) as "price" ,
max(array_to_string(numbers,',')) filter(where type='Region') as "Region"
from test group by key
But I couldn't unnest the Region column during the pivot, in order to use the Region column to join with another table.
My expected output is below.
Since we are using unnest("Region") to do the pivot, there must be a row with region data for each product.
Or the below code will do the trick by creating an array containing a single null:
unnest(CASE WHEN array_length("Region", 1) >= 1
THEN "Region"
ELSE '{null}'::int[] END)
Schema: same as posted in the question above.
select key,"Name",product_id,price,unnest(CASE WHEN array_length("Region", 1) >= 1
THEN "Region"
ELSE '{null}'::int[] END) from
(
select key,
max(array_to_string(words,',')) filter(where type='Name') as "Name",
cast(max(array_to_string(numbers,',')) filter(where type='product_id') as int) as "product_id",
cast(max(array_to_string(numbers,',')) filter(where type='price') as int) as "price" ,
max(numbers) filter(where type='Region') as "Region"
from test group by key
)t order by key
key | Name   | product_id | price | unnest
----|--------|------------|-------|-------
1   | Table  | 2          | 40    | 23
1   | Table  | 2          | 40    | 59
2   | Table1 | 1          | 34    | 23
2   | Table1 | 1          | 34    | 59
2   | Table1 | 1          | 34    | 61
3   | Chair  | 5          | null  | null
db<>fiddle here
Very strange database design... I'm assuming you inherited it?
If none of the other array values will ever have a cardinality > 1, then you can simply unnest:
select
key,
(max (words) filter (where type = 'Name'))[1] as name,
(max (numbers) filter (where type = 'product_id'))[1] as product_id,
(max (numbers) filter (where type = 'price'))[1] as price,
unnest (max (numbers) filter (where type = 'Region')) as region
from test
group by key
If they can have multiple values, that can also be handled.
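For example (a sketch only; the second multi-valued attribute "Warehouse" is invented for illustration), one LATERAL unnest per multi-valued attribute keeps the cross-multiplication explicit:
select t.key, t.name, t.product_id, t.price, r.region, w.warehouse
from (
  select key,
         (max (words) filter (where type = 'Name'))[1] as name,
         (max (numbers) filter (where type = 'product_id'))[1] as product_id,
         (max (numbers) filter (where type = 'price'))[1] as price,
         max (numbers) filter (where type = 'Region') as regions,
         max (numbers) filter (where type = 'Warehouse') as warehouses -- hypothetical attribute
  from test
  group by key
) t
cross join lateral unnest(coalesce(t.regions, array[null]::integer[])) as r(region)
cross join lateral unnest(coalesce(t.warehouses, array[null]::integer[])) as w(warehouse);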
-- EDIT 3/15/2021 --
Short version: an unnest against a null won't produce a row, so if you coalesce the null value into an array of a single null element, that should take care of this part:
select
key,
(max (words) filter (where type = 'Name'))[1] as name,
(max (numbers) filter (where type = 'product_id'))[1] as product_id,
(max (numbers) filter (where type = 'price'))[1] as price,
unnest (coalesce (max (numbers) filter (where type = 'Region'), array[null]::integer[])) as region
from test
group by key
order by key
Now for the part you didn't ask... I and at least one other have been gently nudging you that your database design is going to cause multiple problems at every turn. The fact that it's in production doesn't mean you shouldn't fix it as soon as you can.
This design is what's known as EAV - Entity - Attribute - Value. It has its use cases, but like most good things it can also be applied when it shouldn't. The use case that comes to mind is if you want users to be able to dynamically add attributes to certain objects. Even then, there might be better/easier ways.
And as one example, if you have one million objects, five attributes means you have to store that as five million rows, and the majority of that space will be occupied with repeating the key and attribute names.
Just food for thought. We can continue to triage this with every new scenario you find, but it would be better to redo the design.
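As a sketch of what a redesign might look like (table and column names here are invented for illustration, not taken from your system):
create table product (
    key        int primary key,
    name       text,
    product_id int,
    price      int
);
create table product_region (
    key    int references product (key),
    region int
);
With that shape the pivot disappears entirely: a plain join from product to product_region on key returns one row per product/region pair, which is exactly what the unnest above was reconstructing.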

Generating Lines based on a value from a column in another table

I have the following table:
EventID=00002,DocumentID=0005,EventDesc=ItemsReceived
I have the quantity in another table
DocumentID=0005,Qty=20
I want to generate a result of 20 lines (depending on the quantity) with an auto generated column which will have a sequence of:
ITEM_TAG_001,
ITEM_TAG_002,
ITEM_TAG_003,
ITEM_TAG_004,
..
ITEM_TAG_020
Here's your SQL query:
with cte as (
    select 1 as ctr, t2.Qty, t1.EventID, t1.DocumentId, t1.EventDesc
    from tableA t1
    inner join tableB t2 on t2.DocumentId = t1.DocumentId
    union all
    select ctr + 1, Qty, EventID, DocumentId, EventDesc
    from cte
    where ctr < Qty
)
select *, concat('ITEM_TAG_', right('000' + cast(ctr as varchar(3)), 3)) as item_tag
from cte
option (maxrecursion 0);
Output: one row per unit of Qty, tagged ITEM_TAG_001 through ITEM_TAG_020 for Qty = 20.
Best is to introduce a numbers table; it is very handy in many places...
Something along these lines:
Create some test data:
DECLARE #MockNumbers TABLE(Number BIGINT);
DECLARE #YourTable1 TABLE(DocumentID INT,ItemTag VARCHAR(100),SomeText VARCHAR(100));
DECLARE #YourTable2 TABLE(DocumentID INT, Qty INT);
INSERT INTO #MockNumbers SELECT TOP 100 ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) FROM master..spt_values;
INSERT INTO #YourTable1 VALUES(1,'FirstItem','qty 5'),(2,'SecondItem','qty 7');
INSERT INTO #YourTable2 VALUES(1,5), (2,7);
--The query
SELECT CONCAT(t1.ItemTag,'_',REPLACE(STR(A.Number,3),' ','0'))
FROM #YourTable1 t1
INNER JOIN #YourTable2 t2 ON t1.DocumentID=t2.DocumentID
CROSS APPLY(SELECT Number FROM #MockNumbers WHERE Number BETWEEN 1 AND t2.Qty) A;
The result
FirstItem_001
FirstItem_002
[...]
FirstItem_005
SecondItem_001
SecondItem_002
[...]
SecondItem_007
The idea in short:
We use an INNER JOIN to get the quantity joined to the item.
Then we use APPLY, which is a row-wise action, to bind as many rows to the set as we need.
The first item will return with 5 lines, the second with 7. And the trick with STR() and REPLACE() is one way to create a zero-padded number. You might use FORMAT() (v2012+), but it performs rather slowly...
The table #MockNumbers is a declared table variable containing a list of numbers from 1 to 100. This answer provides an example of how to create a physical numbers and date table. Any database should have such a table...
If you don't want to create a numbers table, you can search for a tally table or a tally on the fly. There are many answers showing approaches to create a list of running numbers.
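For reference, a minimal tally-on-the-fly sketch (SQL Server syntax assumed): cross-join a small value list with itself and number the rows, and you get a running sequence without any permanent table:
-- generates the numbers 1..100 on the fly; add more cross joins for larger ranges
WITH Digits AS (
    SELECT d FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS v(d)
)
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Number
FROM Digits a CROSS JOIN Digits b;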

INSERT multiple rows with SELECT and an array

In EXCEL/VBA I can program my way out of a thunderstorm, but in SQL I am still a novice. So apologies; after much Googling I can only get partway to a solution, which I presume will ultimately be pretty simple, I'm just not wrapping my head around it.
I need to create an INSERT script to add multiple rows in a 3-column table. A simple insert would be:
INSERT INTO table VALUES(StoreId, ItemID, 27)
The first hurdle is to dynamically repeat this for every StoreID in a different table. Which I think becomes this:
INSERT INTO table
SELECT (SELECT StoreID FROM Directory.Divisions), ItemID, 27)
If that is actually correct and would effectively create the 50-60 rows for each store, then I'm almost there. The problem is the ItemID. This will actually be an array of ItemIDs I want to feed in manually. So if there are 50 stores and 3 ItemIDs, it would enter 150 rows. Something like:
ItemID = (123,456,789,246,135)
So how can I merge these two ideas? Pull the StoreIDs from another table, feed in the array of items for the second parameter, then my hardcoded 27 at the end. 50 stores and 10 items should create 500 rows. Thanks in advance.
You can use SELECT ... INTO to create and populate the target table. To generate the item ids you will have to use UNION ALL with your values and a cross join to the divisions table.
select
d.storeid,
x.itemid,
27 as somecolumn
into targettablename
from Directory.Divisions d
cross join (select 123 as itemid union all select 456 union all select 789...) x
Edit: If the table to insert into isn't created yet, it should be created before inserting the data.
create table targettable (store_id varchar(20), item_id varchar(20), somecolumn int);
insert into targettable (store_id, item_id, somecolumn)
select
d.storeid,
x.itemid,
27
from Directory.Divisions d
cross join (select 123 as itemid union all select 456 union all select 789...) x
Firstly you need your array of item ids in a table of some sort. Either a permanent table, table variable or temporary table. For example using a temporary table, which you prefix with a hash symbol:
CREATE TABLE #ItemIds (item_id int)
INSERT INTO #ItemIds VALUES (1)
INSERT INTO #ItemIds VALUES (2)
...
INSERT INTO #ItemIds VALUES (10)
Then this should do the trick:
INSERT INTO table
SELECT StoreId, item_Id, 27
FROM Directory.Divisions, #ItemIds
The results set from the SELECT will be inserted into 'table'. This is an example of a cartesian join. Because there is no join condition, every row from Directory.Divisions is joined to every row in #ItemIds. Hence if you have 50 stores and 10 items, that will result in 50 x 10 = 500 rows.
You may declare table variable for item IDs and use CROSS JOIN to multiply division records to items: http://sqlfiddle.com/#!3/99438/1
create table Divisions(StoreId int)
insert into Divisions values (1), (2), (3)
declare #items table(ItemID int)
insert into #items values (5), (6), (7)
-- insert into target(storeid, itemid, otherColumn)
select d.StoreId, i.ItemID, 27
from Divisions d
cross join #items i

How to insert multiple records using single insert statement in Teradata?

In Teradata, can we insert multiple records using a single INSERT statement in a query? If yes, how?
Say I am trying to do something like:
insert test_rank (storeid,prodid,sales) values (1,'A',1000), (2,'B',2000), (3,'C',3000);
but this does not work in Teradata to insert all 3 records in one statement.
INSERT INTO test_rank (storeid, prodid, sales)
SELECT *
FROM (SELECT *
FROM (SELECT 0 storeid,
1 prodid,
2 sales) T1
UNION ALL
SELECT *
FROM (SELECT 3 storeid,
4 prodid,
5 sales) T2
UNION ALL
SELECT *
FROM (SELECT 6 storeid,
7 prodid,
8 sales) T3
...
)T;
Thanks, Rob! Your advice helped me.
If you are dealing with small data, you can try putting the values inside a text file and importing them with Teradata SQL Assistant (SQLA).
Create a text file containing your input, delimited by a tab (use a comma if tab doesn't work in your version):
1 A 1000
2 B 2000
3 C 3000
Then select the import mode in your SQLA (File -> Import Data), and run the following statement:
insert into YourTable values (?, ?, ?);
Make sure you create the table beforehand, with the correct data types.
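For the sample rows above, the table definition might look something like this (the column types are an assumption; adjust them to your data):
create table YourTable (
    storeid integer,
    prodid  varchar(10),
    sales   integer
);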
I'm not sure how practical this will be, but technically the following is possible:
INSERT INTO MyTable
SELECT *
FROM
( SELECT 1 AS StoreID
, 'A' AS ProdID
, 1000 AS SALES
UNION
SELECT 2
, 'B'
, 2000
UNION
SELECT 3
, 'C'
, 3000
) DT1
;
Secondly, if you are using BTEQ then you can look into the USING command combined with a flat file to repeat a single INSERT statement and load the table. But at that point, if you are doing anything with reasonable volume, you might as well leverage a proper load utility (MultiLoad or FastLoad) to accomplish this task.
Edit - 2015-12-10
The SQL above will not run unless each SELECT in the UNION is first placed in a derived table. See the answer below from Anatoly for the correct syntax.
Sometimes it is useful to create a table with data rather than trying to build a complicated dynamic insert.
CREATE TABLE db.Inc_Config AS (
SELECT
c.calendar_date,
null as Sent_To,
CURRENT_TIMESTAMP as Sent_Date,
date '2016-01-01' as Inc_Start_Date,
date '2016-02-29' as Inc_End_Date
FROM sys_calendar.CALENDAR c
WHERE
c.calendar_date BETWEEN date '2016-01-01' AND date '2016-02-29'
) WITH data;

Items not returned from a list

I have a list of items that is 860 items long. When I execute the query: select * from tableA where item in (... items ...) I get 858 items. I would like to know which 2 items in the list are not in tableA.
NOT IN returns all of the items in the table that are not in the list; I want all the items in the list that are not in the table.
I would recommend that you convert your list into a temp table (there are a ton of UDFs floating around that you can use, e.g. http://blog.sqlauthority.com/2007/05/06/sql-server-udf-function-to-convert-list-to-table/).
Once you have your temp table #List, you can do the following:
CREATE TABLE #List
(
[ListItem] INT
)
SELECT
*
FROM
#List AS l
LEFT OUTER JOIN
tableA AS t
ON
t.[Item] = l.[ListItem]
WHERE
t.[Item] IS NULL
See it in action: https://data.stackexchange.com/stackoverflow/query/61259/items-not-returned-from-a-list
Based on my original understanding of the question, I suggested just adding the keyword NOT:
SELECT * FROM tableA WHERE item NOT IN (... items ...)
But as per the comment, the above will not return what you want. The original question was edited to include this new information.
So, you need to get the data from your WHERE clause into a form that is queryable. Here is one way to do it, where I create an additional table named "items" and use INSERT statements to place each item into it. Since I do not have access to your data, I am going to use integers for the items and set it up with a smaller amount of data.
--Set up some sample data
CREATE TABLE tableA(item INT PRIMARY KEY)
INSERT INTO tableA SELECT 1
INSERT INTO tableA SELECT 2
INSERT INTO tableA SELECT 3
INSERT INTO tableA SELECT 4
INSERT INTO tableA SELECT 9
INSERT INTO tableA SELECT 10
SELECT * FROM tableA WHERE item IN (0,1,2,3,4,5,6)
SELECT * FROM tableA WHERE item NOT IN (0,1,2,3,4,5,6)
-- Create a table and insert all the 860 items from your where clause
CREATE TABLE items(item INT)
INSERT INTO items SELECT 0
INSERT INTO items SELECT 1
INSERT INTO items SELECT 2
INSERT INTO items SELECT 3
INSERT INTO items SELECT 4
INSERT INTO items SELECT 5
INSERT INTO items SELECT 6
-- Want to find a query that returns all of the items in the newly created items table
-- that are not in the original tableA (in this example, the values returned are 0,5,6)
SELECT * FROM items WHERE item NOT IN (SELECT item FROM tableA)
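One caveat: if tableA.item can ever contain a NULL, NOT IN returns no rows at all, because every comparison against the NULL is unknown. A NOT EXISTS sketch expresses the same intent and is safe in that case:
-- same result as the NOT IN above, but unaffected by NULLs in tableA.item
SELECT i.item
FROM items i
WHERE NOT EXISTS (SELECT 1 FROM tableA a WHERE a.item = i.item)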