I have the T-SQL table below.
ID Cost MaxCost
-------------------------------
2 200 300
3 400 1000
6 20 100
The above table must have 10 rows with IDs 1 to 10, so it's missing 7 rows. How do I insert the missing rows with the proper IDs? The Cost and MaxCost for the missing rows should be zero. Do I need to create a temp table that holds the numbers 1 to 10?
No need for a temp table; a simple tally derived table and a LEFT OUTER JOIN are sufficient:
CREATE TABLE #tab(ID INT, Cost INT, MaxCost INT);
INSERT INTO #tab(ID, Cost, MaxCost)
VALUES (2, 200,300),(3, 400, 1000) ,(6, 20, 100);
DECLARE @range_start INT = 1
       ,@range_end INT = 10;
;WITH tally AS
(
SELECT TOP 1000 r = ROW_NUMBER() OVER (ORDER BY name)
FROM master..spt_values
)
INSERT INTO #tab(id, Cost, MaxCost)
SELECT t.r, 0, 0
FROM tally t
LEFT JOIN #tab c
ON t.r = c.ID
WHERE t.r BETWEEN @range_start AND @range_end
AND c.ID IS NULL;
SELECT *
FROM #tab
ORDER BY ID;
EDIT:
A tally table is simply a numbers table. There are many ways to build one with a subquery (see the sketches after this list):
recursive cte
ROW_NUMBER() from system table that holds many values (used here)
UNION ALL and CROSS JOIN
VALUES(...)
using OPENJSON (SQL Server 2016+)
...
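For illustration, here are hedged sketches of two of those variants; the ranges and names are just examples, not taken from the original answer:
-- Recursive CTE (numbers 1 to 10):
;WITH tally AS
(
    SELECT 1 AS r
    UNION ALL
    SELECT r + 1 FROM tally WHERE r < 10
)
SELECT r FROM tally;
-- VALUES(...) cross-joined with itself (numbers 1 to 100):
;WITH digits AS
(
    SELECT d FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) v(d)
)
SELECT r = ones.d + 10 * tens.d + 1
FROM digits ones
CROSS JOIN digits tens;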
The TOP 1000 in the query above will generate only 1000 rows; if you know that you need more, you can use:
SELECT TOP 1000000 r = ROW_NUMBER() OVER (ORDER BY (SELECT 1))
FROM master..spt_values c
CROSS JOIN master..spt_values c2;
Since your number of rows is low, you could just define the data explicitly...
CREATE TABLE Data(ID INT, Cost INT, MaxCost INT);
INSERT INTO Data(ID, Cost, MaxCost) VALUES(2, 200, 300);
INSERT INTO Data(ID, Cost, MaxCost) VALUES(3, 400, 1000);
INSERT INTO Data(ID, Cost, MaxCost) VALUES(6, 20, 100);
and the query...
select *
from (VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) RowNums (ID)
left outer join Data on RowNums.ID = Data.ID
The first part defines a column ID with rows 1-10; it then left outer joins to your data. The beauty of this is that it is very readable.
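If you also need to insert the missing rows with zero costs, as the question asks, a hedged sketch building on the same VALUES list could be:
INSERT INTO Data (ID, Cost, MaxCost)
SELECT RowNums.ID, 0, 0
FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) RowNums (ID)
LEFT OUTER JOIN Data ON RowNums.ID = Data.ID
WHERE Data.ID IS NULL;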
I like to google for new and better ways to do things, so I stumbled over this post. What worked well in SQL 7 and still works well in SQL 2016 is to just use an outer join and look for NULL values (NULL means missing data):
insert into DestTable (keyCol1, col1, col2, col3, ...)
select s.keyCol1, s.col1, s.col2, s.col3, ...
from SourceTable as s
left outer join DestTable as d on d.KeyCol1 = s.KeyCol1
where d.KeyCol1 is null
and ...
Feel free to test it: wrap your statement in a transaction, delete a few rows, and see them come back in the select statement that would normally insert the rows into the destination table...
BEGIN TRAN
--> delete a subset of the data, in this case 5 rows
set rowcount 5;
-->delete and show what is deleted
delete from DestTable
OUTPUT deleted.*, 'DELETED' as [Action];
--> Perform the select to see if the selected rows that are returned match the deleted rows
--insert into DestTable (keyCol1,col1,col2,col3...)
Select s.keyCol1, s.col1, s.col2, s.col3, ...
from SourceTable as s
left outer join DestTable as d on d.KeyCol1=s.KeyCol1
where d.KeyCol1 is null
and ...
ROLLBACK
Another way would be a T-SQL MERGE; google that if you also need to update and optionally delete...
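For reference, a hedged MERGE sketch of the same fill-the-gaps idea, reusing the #tab table from the first answer (treat it as an illustration, not a tested drop-in statement):
-- Insert the missing IDs 1..10 with zero costs; existing rows are left alone.
MERGE #tab AS target
USING (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS src(ID)
    ON target.ID = src.ID
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Cost, MaxCost) VALUES (src.ID, 0, 0);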
Related
I have the following table:
EventID=00002,DocumentID=0005,EventDesc=ItemsReceived
I have the quantity in another table
DocumentID=0005,Qty=20
I want to generate a result of 20 lines (depending on the quantity) with an auto-generated column which will have a sequence of:
ITEM_TAG_001,
ITEM_TAG_002,
ITEM_TAG_003,
ITEM_TAG_004,
..
ITEM_TAG_020
Here's your SQL query.
with cte as (
select 1 as ctr, t2.Qty, t1.EventID, t1.DocumentId, t1.EventDesc from tableA t1
inner join tableB t2 on t2.DocumentId = t1.DocumentId
union all
select ctr + 1, Qty, EventID, DocumentId, EventDesc from cte
where ctr < Qty
)select *, concat('ITEM_TAG_', right('000'+ cast(ctr AS varchar(3)),3)) from cte
option (maxrecursion 0);
The best approach is to introduce a numbers table; it is very handy in many places...
Something along these lines:
Create some test data:
DECLARE #MockNumbers TABLE(Number BIGINT);
DECLARE #YourTable1 TABLE(DocumentID INT,ItemTag VARCHAR(100),SomeText VARCHAR(100));
DECLARE #YourTable2 TABLE(DocumentID INT, Qty INT);
INSERT INTO #MockNumbers SELECT TOP 100 ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) FROM master..spt_values;
INSERT INTO #YourTable1 VALUES(1,'FirstItem','qty 5'),(2,'SecondItem','qty 7');
INSERT INTO #YourTable2 VALUES(1,5), (2,7);
--The query
SELECT CONCAT(t1.ItemTag,'_',REPLACE(STR(A.Number,3),' ','0'))
FROM #YourTable1 t1
INNER JOIN #YourTable2 t2 ON t1.DocumentID=t2.DocumentID
CROSS APPLY(SELECT Number FROM #MockNumbers WHERE Number BETWEEN 1 AND t2.Qty) A;
The result:
FirstItem_001
FirstItem_002
[...]
FirstItem_005
SecondItem_001
SecondItem_002
[...]
SecondItem_007
The idea in short:
We use an INNER JOIN to get the quantity joined to the item.
Now we use APPLY, which works row by row, to bind as many rows to the set as we need.
The first item will return with 5 lines, the second with 7. The trick with STR() and REPLACE() is one way to create a padded number. You might use FORMAT() (v2012+), but it is rather slow...
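For comparison, a hedged sketch of the same query using FORMAT() instead of STR()/REPLACE():
SELECT CONCAT(t1.ItemTag, '_', FORMAT(A.Number, '000'))
FROM #YourTable1 t1
INNER JOIN #YourTable2 t2 ON t1.DocumentID = t2.DocumentID
CROSS APPLY(SELECT Number FROM #MockNumbers WHERE Number BETWEEN 1 AND t2.Qty) A;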
The table #MockNumbers is a declared table variable containing a list of numbers from 1 to 100. This answer provides an example of how to create a physical numbers and date table. Any database should have such a table...
If you don't want to create a numbers table, you can search for a tally table or tally on the fly. There are many answers showing approaches to create a list of running numbers...
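A minimal sketch of a persistent physical numbers table, with illustrative names and row count (not taken from the linked answer):
CREATE TABLE dbo.Numbers (Number INT NOT NULL PRIMARY KEY);
-- Fill it once with 1..100000 using a cross join over a catalog view:
INSERT INTO dbo.Numbers (Number)
SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns a
CROSS JOIN sys.all_columns b;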
I have a table and I would like to replicate/clone records within the same table. However, I would like to do that with a condition, and the condition is that I have a column called recordcount with numeric values. For example, row 1 can take on a recordcount value of, say, 7; then I would like row 1 to be replicated 7 times. Row 2 could take on a value of, say, 9; then I would like row 2 to be replicated 9 times.
Any help is appreciated. Thank you
What you can do (and I'm pretty sure it's not a best practice)
is to hold a table with just numbers, which has a row count corresponding to the numeric value.
Join that with your table, and project your table only.
Example:
create table nums(x int);
insert into nums select 1;
insert into nums select 2;
insert into nums select 2;
insert into nums select 3;
insert into nums select 3;
insert into nums select 3;
create table t (txt varchar(10) , recordcount int);
insert into t select 'A',1;
insert into t select 'B',2;
insert into t select 'C',3;
select t.*
from t
inner join nums
on t.recordcount = nums.x
order by 1
;
Will project:
"A",1
"B",2
"B",2
"C",3
"C",3
"C",3
In EXCEL/VBA I can program my way out of a thunderstorm, but in SQL I am still a novice. So apologies: after much Googling I can only get partway to a solution which I presume will ultimately be pretty simple; I'm just not wrapping my head around it.
I need to create an INSERT script to add multiple rows in a 3-column table. A simple insert would be:
INSERT INTO table VALUES(StoreId, ItemID, 27)
The first hurdle is to dynamically repeat this for every StoreID in a different table, which I think becomes this:
INSERT INTO table
SELECT (SELECT StoreID FROM Directory.Divisions), ItemID, 27)
If that is actually correct and would effectively create the 50-60 rows for each store, then I'm almost there. The problem is the ItemID. This will actually be an array of ItemIDs I want to feed in manually. So if there are 50 stores and 3 ItemIDs, it would enter 150 rows. Something like:
ItemID = (123,456,789,246,135)
So how can I merge these two ideas? Pull the StoreIDs from another table, feed in the array of items for the second parameter, then my hardcoded 27 at the end. 50 stores and 10 items should create 500 rows. Thanks in advance.
You can use SELECT ... INTO to insert into the target table. To generate the item IDs you will have to use UNION ALL with your values and a CROSS JOIN on the Divisions table.
select
d.storeid,
x.itemid,
27 as somecolumn
into targettablename
from Directory.Divisions d
cross join (select 123 as itemid union all select 456 union all select 789...) x
Edit: If the table to insert into isn't created yet, it should be created before inserting the data.
create table targettable (store_id varchar(20), item_id varchar(20),
somecolumn int);
insert into targettable (store_id, item_id, somecolumn)
select
d.storeid,
x.itemid,
27
from Directory.Divisions d
cross join (select 123 as itemid union all select 456 union all select 789...) x
First you need your array of item IDs in a table of some sort: either a permanent table, a table variable, or a temporary table. For example, using a temporary table, which you prefix with a hash symbol:
CREATE TABLE #ItemIds (item_id int)
INSERT INTO #ItemIds VALUES (1)
INSERT INTO #ItemIds VALUES (2)
...
INSERT INTO #ItemIds VALUES (10)
Then this should do the trick:
INSERT INTO table
SELECT StoreId, item_Id, 27
FROM Directory.Divisions, #ItemIds
The result set from the SELECT will be inserted into 'table'. This is an example of a Cartesian join: because there is no join condition, every row from Directory.Divisions is joined to every row in #ItemIds. Hence if you have 50 stores and 10 items, that will result in 50 x 10 = 500 rows.
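The comma in the FROM clause is the older spelling of a cross join; a hedged equivalent with the explicit keyword would be:
-- 'table' is the placeholder name used in the statement above:
INSERT INTO table
SELECT StoreId, item_Id, 27
FROM Directory.Divisions
CROSS JOIN #ItemIds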
You may declare a table variable for the item IDs and use CROSS JOIN to multiply the division records by the items: http://sqlfiddle.com/#!3/99438/1
create table Divisions(StoreId int)
insert into Divisions values (1), (2), (3)
declare #items table(ItemID int)
insert into #items values (5), (6), (7)
-- insert into target(storeid, itemid, otherColumn)
select d.StoreId, i.ItemID, 27
from Divisions d
cross join #items i
I have two tables I want to join.
Table A has one column, named "Week", and contains 52 rows: 1,2,3,4,5,6 etc.
Table 2 has three columns, named "Name", "Week", and "Total", and contains 10 rows:
'Bob', 1, 1
'Bob', 3, 1
'Joe', 4, 1
'Bob', 6, 1
I want to join these together so that my data looks like:
NAME|WEEK|TOTAL
'Bob', 1, 1
'Bob', 2, 0
'Bob', 3, 1
'Bob', 4, 0
'Bob', 5, 0
'Bob', 6, 1
As you can see, a simple outer join. However, when I try to do this, I'm not getting the expected result, no matter what kind of join I use.
My query below:
SELECT a.WEEK, b.Total
FROM Weeks a LEFT JOIN Totals b ON (a.Week = b.Week and b.Name ='Bob')
The result of this query is
NAME|WEEK|TOTAL
'Bob', 1, 1
'Bob', 3, 1
'Bob', 6, 1
Thanks in advance for the help!
I know it's Access, but your join is incorrect. Here we go in SQL Server; same concept, just look at the join condition:
--dont worry about this code im just creating some temp tables
--table to store one column (mainly week number 1,2..52)
CREATE TABLE #Weeks
(
weeknumber int
)
--insert some test data
--week numbers...I'll insert some for you
INSERT INTO #Weeks(weeknumber) VALUES(1)
INSERT INTO #Weeks(weeknumber) VALUES(2)
INSERT INTO #Weeks(weeknumber) VALUES(3)
INSERT INTO #Weeks(weeknumber) VALUES(4)
INSERT INTO #Weeks(weeknumber) VALUES(5)
INSERT INTO #Weeks(weeknumber) VALUES(6)
--create another table with two columns storing the week # and a total for that week
CREATE TABLE #Table2
(
weeknumber int,
total int
)
--insert some data
INSERT INTO #Table2(weeknumber, total) VALUES(1, 100)
--notice i skipped week 2 on purpose to show you the results
INSERT INTO #Table2(weeknumber, total) VALUES(3, 100)
--here's the magic
SELECT t1.weeknumber as weeknumber, ISNULL(t2.total,0) as total FROM
#Weeks t1 LEFT JOIN #Table2 t2 ON t1.weeknumber=t2.weeknumber
--get rid of the temp tables
DROP TABLE #table2
DROP TABLE #Weeks
Results:
1 100
2 0
3 100
4 0
5 0
6 0
Take your week number table (the table that has one column):
SELECT t1.weeknumber as weeknumber
Add to it a null check to replace the null value with a 0. I think there is something in Access like ISNULL:
ISNULL(t2.total, 0) as total
And start your join from your first table and left join to your second table on the weeknumber field. The result is simple:
SELECT t1.weeknumber as weeknumber, ISNULL(t2.total,0) as total FROM
#Weeks t1 LEFT JOIN #Table2 t2 ON t1.weeknumber=t2.weeknumber
Do not pay attention to all the other code I have posted; it is only there to create the temp tables and insert values into them.
SELECT b.Name, b.Week, b.Total
FROM Totals AS b
WHERE b.Name ='Bob'
UNION
SELECT 'Bob' AS Name, a.Week, 0 AS Total
FROM Weeks AS a
WHERE NOT EXISTS ( SELECT *
FROM Totals AS b
WHERE a.Week = b.Week
AND b.Name ='Bob' );
You were on the right track, but just needed to use a left join. Also, the Nz function will put a 0 if total is null.
SELECT Totals.Person, Weeks.WeekNo, Nz(Totals.total, 0) as TotalAmount
FROM Weeks LEFT JOIN Totals
ON (Weeks.WeekNo = Totals.weekno and Totals.Person = 'Bob');
EDIT: The query you now have won't even give the results you've shown, because you left out the Name field (which is a bad name for a field because it is a reserved word). You're still not providing all the information. This query works.
Another approach: Create a separate query on the Totals table having a where clause of Name = 'Bob':
Select Name, WeekNo, Total From Totals Where Name = 'Bob';
and substitute that query for the Totals table in this query.
Select b.Name, w.WeekNo, b.total
from Weeks as w
LEFT JOIN qryJustBob as b
on w.WeekNo = b.WeekNo;
I need to test my mail server. How can I make a Select statement
that selects, say, ID=5469 a thousand times?
If I get your meaning, then a very simple way is to cross join against a derived query on a table with more than 1000 rows in it and put a TOP 1000 on that. This would duplicate your results 1000 times.
EDIT: As an example (this is MSSQL; I don't know if Access is much different):
SELECT
MyTable.*
FROM
MyTable
CROSS JOIN
(
SELECT TOP 1000
*
FROM
sysobjects
) [BigTable]
WHERE
MyTable.ID = 1234
You can use the UNION ALL statement.
Try something like:
SELECT * FROM tablename WHERE ID = 5469
UNION ALL
SELECT * FROM tablename WHERE ID = 5469
You'd have to repeat the SELECT statement a bunch of times but you could write a bit of VB code in Access to create a dynamic SQL statement and then execute it. Not pretty but it should work.
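A hedged T-SQL sketch of the same dynamic-SQL idea (the answer suggests VBA in Access; 'tablename' is the placeholder used above):
DECLARE @sql NVARCHAR(MAX) = N'';
DECLARE @i INT = 1;
-- Build 1000 copies of the SELECT glued together with UNION ALL, then run it.
WHILE @i <= 1000
BEGIN
    SET @sql += N'SELECT * FROM tablename WHERE ID = 5469'
              + CASE WHEN @i < 1000 THEN N' UNION ALL ' ELSE N'' END;
    SET @i += 1;
END;
EXEC sp_executesql @sql;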
Create a helper table for this purpose:
JUST_NUMBER(NUM INT primary key)
Insert (with the help of some VB script) numbers from 1 to N, then execute this query, which has no join condition:
SELECT MYTABLE.*
FROM MYTABLE,
JUST_NUMBER
WHERE MYTABLE.ID = 5469
AND JUST_NUMBER.NUM <= 1000
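If you would rather fill JUST_NUMBER from SQL instead of a VB script, a hedged sketch (assuming SQL Server) could be:
INSERT INTO JUST_NUMBER (NUM)
SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns;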
Here's a way of using a recursive common table expression to generate some empty rows, then to cross join them back onto your desired row:
declare #myData table (val int) ;
insert #myData values (666),(888),(777) --some dummy data
;with cte as
(
select 100 as a
union all
select a-1 from cte where a>0
--generate 100 rows, the max recursion depth
)
,someRows as
(
select top 1000 0 a from cte,cte x1,cte x2
--xjoin the hundred rows a few times
--to generate 1030301 rows, then select top n rows
)
select m.* from #myData m,someRows where m.val=666
Substitute your real table for #myData, and alter the final predicate to suit.
An easy way...
Only one row exists in the DB:
sku = 52, description = Skullcandy Inkd Green, price = 50,00
Relate another table, one that has no key constraint, to the main table.
The original query:
SELECT Prod_SKU , Prod_Descr , Prod_Price FROM dbo.TB_Prod WHERE Prod_SKU = N'52'
The functional query, adding an unrelated table called 'dbo.TB_Labels':
SELECT TOP ('times') Prod_SKU , Prod_Descr , Prod_Price FROM dbo.TB_Prod,dbo.TB_Labels WHERE Prod_SKU = N'52'
In PostgreSQL there is a nice function called generate_series, so it is as simple as:
select information from test_table, generate_series(1, 1000) where id = 5469
In this way, the query is executed 1000 times.
Example for PostgreSQL:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; --To be able to use function uuid_generate_v4()
--Create a test table
create table test_table (
id serial not null,
uid UUID NOT NULL,
CONSTRAINT uid_pk PRIMARY KEY(id));
-- Insert 10000 rows
insert into test_table (uid)
select uuid_generate_v4() from generate_series(1, 10000);
-- Read the data from id=5469 one thousand times
select id, uid, uuid_generate_v4() from test_table, generate_series(1, 1000) where id = 5469;
As you can see in the result below, the data from uid is read 1000 times as confirmed by the generation of a new uuid at every new row.
id |uid |uuid_generate_v4
----------------------------------------------------------------------------------------
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"5630cd0d-ee47-4d92-9ee3-b373ec04756f"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"ed44b9cb-c57f-4a5b-ac9a-55bd57459c02"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"3428b3e3-3bb2-4e41-b2ca-baa3243024d9"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"7c8faf33-b30c-4bfa-96c8-1313a4f6ce7c"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"b589fd8a-fec2-4971-95e1-283a31443d73"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"8b9ab121-caa4-4015-83f5-0c2911a58640"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"7ef63128-b17c-4188-8056-c99035e16c11"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"5bdc7425-e14c-4c85-a25e-d99b27ae8b9f"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"9bbd260b-8b83-4fa5-9104-6fc3495f68f3"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"c1f759e1-c673-41ef-b009-51fed587353c"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"4a70bf2b-ddf5-4c42-9789-5e48e2aec441"
Of course other DBs won't necessarily have the same function, but it can be done.
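For example, SQL Server 2022 added its own GENERATE_SERIES; a hedged sketch of the same idea there (assuming that version) would be:
-- Read the row with id = 5469 one thousand times (SQL Server 2022+):
SELECT t.*
FROM test_table AS t
CROSS JOIN GENERATE_SERIES(1, 1000) AS gs
WHERE t.id = 5469;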
If you are doing this in SQL Server:
declare @cnt int
set @cnt = 0
while @cnt < 1000
begin
select '12345'
set @cnt = @cnt + 1
end
select '12345' can be any expression
Repeat rows based on a column value of TestTable. First run the CREATE TABLE and INSERT statements, then run the following query for the desired result.
This may be another solution:
CREATE TABLE TestTable
(
ID INT IDENTITY(1,1),
Col1 varchar(10),
Repeats INT
)
INSERT INTO TESTTABLE
VALUES ('A',2), ('B',4),('C',1),('D',0)
;WITH x AS
(
SELECT TOP (SELECT MAX(Repeats)+1 FROM TestTable) rn = ROW_NUMBER()
OVER (ORDER BY [object_id])
FROM sys.all_columns
ORDER BY [object_id]
)
SELECT * FROM x
CROSS JOIN TestTable AS d
WHERE x.rn <= d.Repeats
ORDER BY Col1;
This trick helped me in my requirement.
Here, PRODUCTDETAILS is my data table and orderid is my column.
declare @Req_Rows int = 12
;WITH cte AS
(
SELECT 1 AS Number
UNION ALL
SELECT Number + 1 FROM cte WHERE Number < @Req_Rows
)
SELECT PRODUCTDETAILS.*
FROM cte, PRODUCTDETAILS
WHERE PRODUCTDETAILS.orderid = 3
create table #tmp1 (id int, fld varchar(max))
insert into #tmp1 (id, fld)
values (1,'hello!'),(2,'world'),(3,'nice day!')
select * from #tmp1
go
select * from #tmp1 where id=3
go 1000
drop table #tmp1
In SQL Server, try:
print 'wow'
go 5
output:
Beginning execution loop
wow
wow
wow
wow
wow
Batch execution completed 5 times.
The easy way is to create a table with 1000 rows. Let's call it BigTable. Then you would query for the data you want and join it with the big table, like this:
SELECT MyTable.*
FROM MyTable, BigTable
WHERE MyTable.ID = 5469