My database schema looks like this:
Table t1:
id
valA
valB
Table t2:
id
valA
valB
For a given set of rows in one of these tables, I want to find rows in both tables that have the same valA or valB (comparing valA with valA and valB with valB, not valA with valB). Then I want to look for rows with the same valA or valB as the rows in the result of the previous query, and so on.
Example data:
t1 (id, valA, valB):
1, a, B
2, b, J
3, d, E
4, d, B
5, c, G
6, h, J
t2 (id, valA, valB):
1, b, E
2, d, H
3, g, B
Example 1:
Input: Row 1 in t1
Output:
Level 1: t1/4, t2/3
Level 2: t1/3, t2/2
Level 3: t2/1
...
Example 2:
Input: Row 6 in t1
Output:
Level 1: t1/2
Level 2: t2/1
I would like each row in the result to be labeled with the search level at which it was found (e.g. in Example 1: level 1 for t1/4 and t2/3, level 2 for t1/3 and t2/2, ...). A limited depth of recursion is okay. Over time, I may want to include more tables with the same schema in the query, so it would be nice if the query were easy to extend for that purpose.
But what matters most is performance. What is the fastest possible way to accomplish this?
Thanks in advance!
Try this. It's not fully tested, but it looked like it was working :P (http://pastie.org/1140339)
drop table if exists t1;
create table t1
(
id int unsigned not null auto_increment primary key,
valA char(1) not null,
valB char(1) not null
)
engine=innodb;
drop table if exists t2;
create table t2
(
id int unsigned not null auto_increment primary key,
valA char(1) not null,
valB char(1) not null
)
engine=innodb;
drop view if exists t12;
create view t12 as
select 1 as tid, id, valA, valB from t1
union
select 2 as tid, id, valA, valB from t2;
insert into t1 (valA, valB) values
('a','B'),
('b','J'),
('d','E'),
('d','B'),
('c','G'),
('h','J');
insert into t2 (valA, valB) values
('b','E'),
('d','H'),
('g','B');
drop procedure if exists find_children;
delimiter #
create procedure find_children
(
in p_tid tinyint unsigned,
in p_id int unsigned
)
proc_main:begin
declare done tinyint unsigned default 0;
declare dpth smallint unsigned default 0;
create temporary table children(
tid tinyint unsigned not null,
id int unsigned not null,
valA char(1) not null,
valB char(1) not null,
depth smallint unsigned default 0,
primary key (tid, id, valA, valB)
)engine = memory;
insert into children select p_tid, t.id, t.valA, t.valB, dpth from t12 t where t.tid = p_tid and t.id = p_id;
create temporary table tmp engine=memory select * from children;
/* http://dev.mysql.com/doc/refman/5.0/en/temporary-table-problems.html */
while done <> 1 do
if exists(
select 1 from t12 t
inner join tmp on (tmp.valA = t.valA or tmp.valB = t.valB) and tmp.depth = dpth) then
insert ignore into children
select
t.tid, t.id, t.valA, t.valB, dpth+1
from t12 t
inner join tmp on (tmp.valA = t.valA or tmp.valB = t.valB) and tmp.depth = dpth;
set dpth = dpth + 1;
truncate table tmp;
insert into tmp select * from children where depth = dpth;
else
set done = 1;
end if;
end while;
select * from children order by depth;
drop temporary table if exists children;
drop temporary table if exists tmp;
end proc_main #
delimiter ;
call find_children(1,1);
call find_children(1,6);
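For reference, if I've traced the sample data correctly, find_children(1,1) should return something like this (row order within a level may vary):
tid  id  valA  valB  depth
1    1   a     B     0
1    4   d     B     1
2    3   g     B     1
1    3   d     E     2
2    2   d     H     2
2    1   b     E     3
1    2   b     J     4
1    6   h     J     5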
You can do it with stored procedures (see listings 7 and 7a):
http://www.artfulsoftware.com/mysqlbook/sampler/mysqled1ch20.html
You just need to figure out a query for the step of the recursion - taking the already-found rows and finding some more rows.
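A sketch of what that step could look like, assuming a unified view t12 like the one in the other answer and a work table found(tid, id, valA, valB, depth) holding everything collected so far (these names are mine, purely illustrative):
insert ignore into found
select t.tid, t.id, t.valA, t.valB, @depth + 1
from t12 t
inner join found f
on (f.valA = t.valA or f.valB = t.valB)
and f.depth = @depth;
You run this in a loop, bumping @depth each time, until it stops inserting new rows.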
If you had a database which supported SQL-99 recursive common table expressions (like PostgreSQL or Firebird, hint hint), you could take the same approach as in the above link, but using a rCTE as the framework, so avoiding the need to write a stored procedure.
EDIT: I had a go at doing this with an rCTE in PostgreSQL 8.4, and although I can find the rows, I can't find a way to label them with the depth at which they were found. First, I create a view to unify the tables:
create view t12 (tbl, id, vala, valb) as (
(select 't1', id, vala, valb from t1)
union
(select 't2', id, vala, valb from t2)
)
Then do this query:
with recursive descendants (tbl, id, vala, valb) as (
(select *
from t12
where tbl = 't1' and id = 1) -- the query that identifies the seed rows, here just t1/1
union
(select c.*
from descendants p, t12 c
where (p.vala = c.vala or p.valb = c.valb)) -- the recursive term
)
select * from descendants;
You would imagine that capturing depth would be as simple as adding a depth column to the rCTE, set to zero in the seed query, then somehow incremented in the recursive step. However, I couldn't find any way to do that, given that you can't write subqueries against the rCTE in the recursive step (so nothing like select max(depth) + 1 from descendants in the column list), and you can't use an aggregate function in the column list (so no max(p.depth) + 1 in the column list coupled with a group by c.* on the select).
You would also need to add a restriction to the query to exclude already-selected rows; you don't need to do that in the basic version, because of the distincting effect of the union, but if you add a depth column, then a row can be included in the results more than once with different depths, and you'll get a Cartesian explosion. And you can't easily prevent it, because you can't have subqueries against the rCTE, which means you can't say anything like and not exists (select * from descendants d where d.tbl = c.tbl and d.id = c.id)!
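Since the question says a limited depth is acceptable, one hedged workaround is to carry the depth anyway, cap the recursion to keep the explosion finite, and deduplicate afterwards by taking the minimum depth per row; a sketch:
with recursive descendants (tbl, id, vala, valb, depth) as (
    (select tbl, id, vala, valb, 0
     from t12
     where tbl = 't1' and id = 1)
  union
    (select c.tbl, c.id, c.vala, c.valb, p.depth + 1
     from descendants p, t12 c
     where (p.vala = c.vala or p.valb = c.valb)
       and p.depth < 10) -- arbitrary cap; without it the depths grow without bound
)
select tbl, id, vala, valb, min(depth) as depth
from descendants
group by tbl, id, vala, valb;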
I know all this stuff about recursive queries is of no use to you, but I find it riveting, so please do excuse me.
Suppose a table with 3 columns, where each row represents a unique combination of the values:
a a a
a a b
a b a
b b a
b b c
c c a
...
However, what I want is:
aab = baa = aba
cca = cac = acc
...
Finally, I want to get these values in CSV format, one combination per value, like the image I attached.
Thanks for your help!
Below is the query that generates my problem; please take a look!
--=======================================
--populate test data
--=======================================
drop table if exists #t0
;
with
cte_tally as
(
select row_number() over (order by (select 1)) as n
from sys.all_columns
)
select
char(n) as alpha
into #t0
from
cte_tally
where
(n > 64 and n < 91) or
(n > 96 and n < 123);
drop table if exists #t1
select distinct upper(alpha) alpha into #t1 from #t0
drop table if exists #t2
select
a.alpha c1
, b.alpha c2
, c.alpha c3
, row_number()over(order by (select 1)) row_num
into #t2
from #t1 a
join #t1 b on 1=1
join #t1 c on 1=1
drop table if exists #t3
select *
into #t3
from (
select *
from #t2
) p
unpivot
(cvalue for c in (c1,c2,c3)
) unpvt
select
row_num
, c
, cvalue
from #t3
order by 1,2
--=======================================
--these three rows should be treated equally
--=======================================
select *
from #t2
where concat(c1,c2,c3) in ('ABA','AAB', 'BAA')
--=======================================
--what i've tried...
--the row count is actually correct, but the problem is that it omits duplicate letters within a combination.
--=======================================
select
distinct
stuff((
select
distinct
'.' + cvalue
from #t3 a
where a.row_num = h.row_num
for xml path('')
),1,1,'') as comb
from #t3 h
As pointed out in the comments, you can unpivot the values, sort them in the right order and reaggregate them into a single row. Then you can group the original rows by those new values.
SELECT v.a, v.b, v.c, cnt = COUNT(*)
FROM #t2
CROSS APPLY (
SELECT a = MIN(val), b = MIN(CASE WHEN rn = 2 THEN val END), c = MAX(val)
FROM (
SELECT *, rn = ROW_NUMBER() OVER (ORDER BY val)
FROM (VALUES (c1),(c2),(c3) ) v3(val)
) v2
) v
GROUP BY v.a, v.b, v.c;
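With the sample rows from above, 'AAB', 'ABA' and 'BAA' all normalize to a = 'A', b = 'A', c = 'B', so they land in the same group.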
Really, what you should perhaps do, is ensure that the values are in the correct order in the first place:
ALTER TABLE #t2
ADD CONSTRAINT t2_ValuesOrder
CHECK (c1 <= c2 AND c2 <= c3);
I would be curious why, though I'm sure you have a reason. I might suggest a lookup table holding all associated keys, a "mapping table"; you can optimize some of this as you implement it. First create one table for holding the "next/new key" (this is where the 1, 2, 3... come from); you get a new key after each batch of records you bulk-insert into your mapping table. The mapping table holds the combinations of the key values, one row for each permutation, along with your new key. You should end up with a table looking something like:
A, B, C, 1
A, C, B, 1
B, A, C, 1
...
X, Y, Z, 2
X, Z, Y, 2
If you can update your source table to hold a column for your mapping key (the 1, 2, 3), then you just look up from the mapping table where c1 = 'a', c2 = 'a', c3 = 'b'; order shouldn't matter for this lookup. One suggestion is to create a composite unique key on c1, c2, c3 in your mapping table. Then, to get your records, look up the mapping key value from the mapping table and query for records matching it. Or, if you don't do a pre-lookup to get the mapping key, you should be able to do a self-join using the mapping key value.
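A minimal sketch of that idea (table and column names are mine, not from the question):
create table dbo.Mapping (
c1 char(1) not null,
c2 char(1) not null,
c3 char(1) not null,
map_key int not null,
constraint uq_mapping unique (c1, c2, c3)
);
-- look up the key for one ordering, then fetch every permutation sharing it
declare @key int = (select map_key from dbo.Mapping
                    where c1 = 'A' and c2 = 'A' and c3 = 'B');
select * from dbo.Mapping where map_key = @key;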
If you want them in a CSV format:
select distinct v.cs
from #t2 t2 cross apply
(select string_agg(c, ',') within group (order by c desc) as cs
from (values (t2.c1), (t2.c2), (t2.c3)
) v(c)
) v;
It seems to me that what you need is some form of masking*. Take this fiddle:
http://sqlfiddle.com/#!18/fc67f/8
where I have created a mapping table that pairs each possible value with an increasing power of 10. Cross joining that map table with itself, concatenating the values, summing the masks and grouping on the total yields all the unique combinations.
Here is the code from the fiddle:
CREATE TABLE maps (
val varchar(1),
num int
);
INSERT INTO maps (val, num) VALUES ('a', 1), ('b', 10), ('c', 100);
SELECT mask, max(vals) as val
FROM (
SELECT concat(m1.val, m2.val, m3.val) as vals,
m1.num + m2.num + m3.num as mask
FROM maps m1
CROSS JOIN maps m2
CROSS JOIN maps m3
) q GROUP BY mask
Using these powers of 10 ensures that the mask holds the count of each value, one decimal place per value in the resulting number, and grouping on it gives you the unique(ish) strings.
I don't know what your data looks like, and if you have more than 10 possible values then you will have to use some other base than 10, but the theory should still apply. I didn't write code to extract the columns from the value table into the mapping table, but I'm sure you can do that.
*actually, I think the term I was looking for was flag.
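For completeness, a hedged sketch of that extraction, assuming the distinct values sit in a table like #t1(alpha) from the question's setup:
insert into maps (val, num)
select alpha, power(10, row_number() over (order by alpha) - 1)
from #t1
where alpha in ('A', 'B', 'C'); -- keep the value set small: as noted above,
                                -- beyond ~9 distinct values the power-of-10
                                -- trick needs a different base or wider type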
I have a live data table that holds the old values, and a new table into which I am moving data from that live table. How can I find records that were inserted or updated in the new table, without using EXCEPT, CHECKSUM (BINARY_CHECKSUM), or a JOIN? I am looking for a solution using a system-defined function.
The requirement is interesting, as the best solutions are to use EXCEPT or a FULL JOIN. What you are trying to do is referred to as a left anti semi join. Here's a good article about the topic.
Note this sample data and the solutions (note that my solution that does not use EXCEPT or a join is the last solution):
-- sample data
if object_id('tempdb.dbo.orig') is not null drop table dbo.orig;
if object_id('tempdb.dbo.new') is not null drop table dbo.new;
create table dbo.orig (someid int, col1 int, constraint uq_cl_orig unique (someid, col1));
create table dbo.new (someid int, col1 int, constraint uq_cl_new unique (someid, col1));
insert dbo.orig values (1,100),(2,110),(3,120),(4,2000)
insert dbo.new values (1,100),(2,110),(3,122),(5,999);
Here's the EXCEPT version
select someid
from
(
select * from dbo.new except
select * from dbo.orig
) n
union -- union "distict"
select someid
from
(
select * from dbo.orig except
select * from dbo.new
) o;
Here's a FULL JOIN Solution which will also tell you if the record was removed, changed or added:
select
someid = isnull(n.someid, o.someid),
[status] =
case
when count(isnull(n.someid, o.someid)) > 1 then 'changed'
when max(n.col1) is null then 'removed' else 'added'
end
from dbo.new n
full join dbo.orig o
on n.col1=o.col1 and n.someid = o.someid
where n.col1 is null or o.col1 is null
group by isnull(n.someid, o.someid);
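Tracing the sample data by hand, this should yield:
someid      status
----------- -------
3           changed
4           removed
5           added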
But because those efficient solutions are not an option, you will need to go with a NOT IN or NOT EXISTS subquery. And because it has to be a function, I am encapsulating the logic into a function.
create function dbo.newOrChangedOrRemoved()
returns table as return
-- get the new records
select someid, [status] = 'new'
from dbo.new n
where n.someid not in (select someid from dbo.orig)
union all
-- get the removed records
select someid, 'removed'
from dbo.orig o
where o.someid not in (select someid from dbo.new)
union all
-- get the changed records
select someid, 'changed'
from dbo.orig o
where exists
(
select *
from dbo.new n
where o.someid = n.someid and o.col1 <> n.col1
);
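The function can then be queried like any table:
select * from dbo.newOrChangedOrRemoved();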
Results:
someid status
----------- -------
5 new
4 removed
3 changed
I have 2 tables as such
Table ErrorCodes:
type_code desc
01 Error101
02 Error99
03 Error120
Table ErrorXML:
row_index typeCode
1 87
2 02
3 01
The output should be the description (column desc) of the first type_code, by row_index, that matches between the 2 tables.
Expected output : Error99
This is what I have gotten so far:
select isnull(descript, 'unknown') as DESCRIPTION
from (select top 1 a.[desc] as descript
from ErrorCodes a, ErrorXML b
where a.type_code = b.typeCode
order by b.row_index) t
But this query doesn't return the string UNKNOWN when there is no common typeCode (the join condition) between the 2 tables. In this case, I'm getting null.
How can I resolve this?
This is an interesting question. I believe the following can be an intuitive and beautiful solution (I used desc_ as the column name rather than desc, which is a reserved word):
select (select desc_ from ErrorCodes x where x.type_code = a.typeCode) desc_
from ErrorXML a
where (select desc_ from ErrorCodes x where x.type_code = a.typeCode) is not null
order by row_index
limit 1;
If you also need to handle the case where the query returns no row, the following syntax should suffice for MySQL. For other databases you can use similar encapsulation with isnull, nvl, etc.:
select ifnull((select (select desc_ from ErrorCodes x where x.type_code = a.typeCode) desc_ from ErrorXML a where (select desc_ from ErrorCodes x where x.type_code = a.typeCode) is not null order by row_index limit 1), 'UNKNOWN');
To test, I used the following scripts, and it seems to work properly:
create database if not exists stackoverflow;
use stackoverflow;
drop table if exists ErrorCodes;
create table ErrorCodes
(
type_code varchar(2),
desc_ varchar(10)
);
insert into ErrorCodes(type_code, desc_) values
('01', 'Error101'),
('02', 'Error99'),
('03', 'Error120');
drop table if exists ErrorXML;
create table ErrorXML
(
row_index integer,
typeCode varchar(2)
);
insert into ErrorXML(row_index, typeCode) values
('1', '87'),
('2', '02'),
('3', '01');
Next-to-last note: while generating your tables, try to use the same column names as much as possible; e.g., I'd suggest ErrorXML use type_code rather than typeCode.
Final note: I choose to use lowercase letters in SQL, since capital letters should be reserved for emphasizing an important point. I suggest that style as well.
What about this: do a subquery to bring back the first row_index for each typeCode, then do a LEFT OUTER JOIN on the ErrorCodes table so that you get NULLs as well.
SELECT
ISNULL(ErrorCodes.[desc], 'unknown') AS description,
ErrorXML.row_index
FROM ErrorCodes
LEFT OUTER JOIN (
SELECT typeCode, MIN(row_index) AS row_index
FROM ErrorXML
GROUP BY typeCode
) AS ErrorXML ON ErrorCodes.type_code = ErrorXML.typeCode
I'm writing a function in node.js to query a PostgreSQL table.
If the row exists, I want to return the id column from the row.
If it doesn't exist, I want to insert it and return the id (insert into ... returning id).
I've been trying variations of case and if else statements and can't seem to get it to work.
A solution in a single SQL statement. It requires PostgreSQL 9.1 or later, though, since it uses a data-modifying CTE.
Consider the following demo:
Test setup:
CREATE TEMP TABLE tbl (
id serial PRIMARY KEY
,txt text UNIQUE -- obviously there is a unique column (or set of columns)
);
INSERT INTO tbl(txt) VALUES ('one'), ('two');
INSERT / SELECT command:
WITH v AS (SELECT 'three'::text AS txt)
,s AS (SELECT id FROM tbl JOIN v USING (txt))
,i AS (
INSERT INTO tbl (txt)
SELECT txt
FROM v
WHERE NOT EXISTS (SELECT * FROM s)
RETURNING id
)
SELECT id, 'i'::text AS src FROM i
UNION ALL
SELECT id, 's' FROM s;
The first CTE v is not strictly necessary, but it means you have to enter your values only once.
The second CTE s selects the id from tbl if the "row" exists.
The third CTE i inserts the "row" into tbl if (and only if) it does not exist, returning id.
The final SELECT returns the id. I added a column src indicating the "source" - whether the "row" pre-existed and id comes from a SELECT, or the "row" was new and so is the id.
This version should be as fast as possible as it does not need an additional SELECT from tbl and uses the CTEs instead.
To make this safe against possible race conditions in a multi-user environment, and for updated techniques using the new UPSERT in Postgres 9.5 or later, see: Is SELECT or INSERT in a function prone to race conditions?
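For reference, a sketch of the 9.5+ pattern against the same demo table (ON CONFLICT ... DO NOTHING returns no row when the value already exists, so the SELECT branch covers that case; the race-condition caveats from the linked answer still apply):
WITH i AS (
   INSERT INTO tbl (txt)
   VALUES ('three')
   ON CONFLICT (txt) DO NOTHING
   RETURNING id
)
SELECT id, 'i'::text AS src FROM i
UNION ALL
SELECT id, 's' FROM tbl WHERE txt = 'three'
LIMIT 1;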
I would suggest doing the checking on the database side and just returning the id to nodejs.
Example:
CREATE OR REPLACE FUNCTION foo(p_param1 tableFoo.attr1%TYPE, p_param2 tableFoo.attr2%TYPE) RETURNS tableFoo.id%TYPE AS $$
DECLARE
v_id tableFoo.id%TYPE;
BEGIN
SELECT id
INTO v_id
FROM tableFoo
WHERE attr1 = p_param1
AND attr2 = p_param2;
IF v_id IS NULL THEN
INSERT INTO tableFoo(id, attr1, attr2) VALUES (DEFAULT, p_param1, p_param2)
RETURNING id INTO v_id;
END IF;
RETURN v_id;
END;
$$ LANGUAGE plpgsql;
And then on the Node.js side (I'm using node-postgres in this example):
var pg = require('pg');
pg.connect('someConnectionString', function(connErr, client){
//do some errorchecking here
client.query('SELECT id FROM foo($1, $2);', ['foo', 'bar'], function(queryErr, result){
//errorchecking
var id = result.rows[0].id;
});
});
Something like this, if you are on PostgreSQL 9.1:
with test_insert as (
insert into foo (id, col1, col2)
select 42, 'Foo', 'Bar'
where not exists (select * from foo where id = 42)
returning foo.id, foo.col1, foo.col2
)
select id, col1, col2
from test_insert
union
select id, col1, col2
from foo
where id = 42;
It's a bit longish and you need to repeat the id you are testing for several times, but I can't think of a different solution that involves a single SQL statement.
If a row with id=42 exists, the writeable CTE will not insert anything and thus the existing row will be returned by the second union part.
When testing this, I actually thought the new row would be returned twice (therefore a union, not a union all), but it turns out that the second select statement is evaluated before the whole statement is run and does not see the newly inserted row. So when a new row is inserted, it is taken from the "returning" part.
create table t (
id serial primary key,
a integer
)
;
-- insert a row with a = 2 only if no such row exists yet
insert into t (a)
select 2
from (
select count(*) as s
from t
where a = 2
) s
where s.s = 0
;
-- either way, return the id of the row with a = 2
select id
from t
where a = 2
;
I need to test my mail server. How can I make a SELECT statement that selects, say, ID=5469 a thousand times?
If I get your meaning, then a very simple way is to cross join onto a derived query over a table with more than 1000 rows in it and put a TOP 1000 on that. This would duplicate your results 1000 times.
EDIT: As an example (this is MSSQL; I don't know if Access is much different):
SELECT
MyTable.*
FROM
MyTable
CROSS JOIN
(
SELECT TOP 1000
*
FROM
sysobjects
) [BigTable]
WHERE
MyTable.ID = 1234
You can use the UNION ALL statement.
Try something like:
SELECT * FROM tablename WHERE ID = 5469
UNION ALL
SELECT * FROM tablename WHERE ID = 5469
You'd have to repeat the SELECT statement a bunch of times, but you could write a bit of VB code in Access to create a dynamic SQL statement and then execute it. Not pretty, but it should work.
Create a helper table for this purpose:
JUST_NUMBER(NUM INT primary key)
Insert numbers from 1 to N (with the help of some (VB) script). Then execute this unjoined query:
SELECT MYTABLE.*
FROM MYTABLE,
JUST_NUMBER
WHERE MYTABLE.ID = 5469
AND JUST_NUMBER.NUM <= 1000
Here's a way of using a recursive common table expression to generate some empty rows, then to cross join them back onto your desired row:
declare #myData table (val int) ;
insert #myData values (666),(888),(777) --some dummy data
;with cte as
(
select 100 as a
union all
select a-1 from cte where a>0
--generate 100 rows, the max recursion depth
)
,someRows as
(
select top 1000 0 a from cte,cte x1,cte x2
--xjoin the hundred rows a few times
--to generate 1030301 rows, then select top n rows
)
select m.* from #myData m,someRows where m.val=666
Substitute #myData for your real table, and alter the final predicate to suit.
An easy way...
Only one row like this exists in the DB:
sku = 52, description = Skullcandy Inkd Green, price = 50,00
Relate another table that has no key constraint to the main table.
Original Query
SELECT Prod_SKU , Prod_Descr , Prod_Price FROM dbo.TB_Prod WHERE Prod_SKU = N'52'
The functional query, adding an unrelated table called 'dbo.TB_Labels' (TOP needs an actual number here, e.g. 1000, and TB_Labels must contain at least that many rows):
SELECT TOP (1000) Prod_SKU , Prod_Descr , Prod_Price FROM dbo.TB_Prod, dbo.TB_Labels WHERE Prod_SKU = N'52'
In Postgres there is a nice function called generate_series, so it is as simple as:
select information from test_table, generate_series(1, 1000) where id = 5469
In this way, the matching row is returned 1000 times.
Example for postgreSQL:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; --To be able to use function uuid_generate_v4()
--Create a test table
create table test_table (
id serial not null,
uid UUID NOT NULL,
CONSTRAINT uid_pk PRIMARY KEY(id));
-- Insert 10000 rows
insert into test_table (uid)
select uuid_generate_v4() from generate_series(1, 10000);
-- Read the data from id=5469 one thousand times
select id, uid, uuid_generate_v4() from test_table, generate_series(1, 1000) where id = 5469;
As you can see in the result below, the uid data is read 1000 times, as confirmed by the generation of a new uuid on every row.
id |uid |uuid_generate_v4
----------------------------------------------------------------------------------------
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"5630cd0d-ee47-4d92-9ee3-b373ec04756f"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"ed44b9cb-c57f-4a5b-ac9a-55bd57459c02"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"3428b3e3-3bb2-4e41-b2ca-baa3243024d9"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"7c8faf33-b30c-4bfa-96c8-1313a4f6ce7c"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"b589fd8a-fec2-4971-95e1-283a31443d73"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"8b9ab121-caa4-4015-83f5-0c2911a58640"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"7ef63128-b17c-4188-8056-c99035e16c11"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"5bdc7425-e14c-4c85-a25e-d99b27ae8b9f"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"9bbd260b-8b83-4fa5-9104-6fc3495f68f3"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"c1f759e1-c673-41ef-b009-51fed587353c"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"4a70bf2b-ddf5-4c42-9789-5e48e2aec441"
Of course other DBs won't necessarily have the same function, but it can be done; see here.
If you are doing this in SQL Server:
declare #cnt int
set #cnt = 0
while #cnt < 1000
begin
select '12345'
set #cnt = #cnt + 1
end
The select '12345' can be any expression.
This may be another solution: repeat rows based on the Repeats column value of TestTable. First run the CREATE TABLE and INSERT statements, then run the following query for the desired result.
CREATE TABLE TestTable
(
ID INT IDENTITY(1,1),
Col1 varchar(10),
Repeats INT
)
INSERT INTO TESTTABLE
VALUES ('A',2), ('B',4),('C',1),('D',0)
WITH x AS
(
SELECT TOP (SELECT MAX(Repeats)+1 FROM TestTable) rn = ROW_NUMBER()
OVER (ORDER BY [object_id])
FROM sys.all_columns
ORDER BY [object_id]
)
SELECT * FROM x
CROSS JOIN TestTable AS d
WHERE x.rn <= d.Repeats
ORDER BY Col1;
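With the inserts above, this returns A twice, B four times, C once, and no rows for D (its Repeats is 0).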
This trick helped me with my requirement. Here, PRODUCTDETAILS is my data table and orderid is my column:
declare @Req_Rows int = 12
;WITH cte AS
(
SELECT 1 AS Number
UNION ALL
SELECT Number + 1 FROM cte WHERE Number < @Req_Rows
)
SELECT PRODUCTDETAILS.*
FROM cte, PRODUCTDETAILS
WHERE PRODUCTDETAILS.orderid = 3
create table #tmp1 (id int, fld varchar(max))
insert into #tmp1 (id, fld)
values (1,'hello!'),(2,'world'),(3,'nice day!')
select * from #tmp1
go
select * from #tmp1 where id=3
go 1000
drop table #tmp1
In SQL Server, try:
print 'wow'
go 5
output:
Beginning execution loop
wow
wow
wow
wow
wow
Batch execution completed 5 times.
The easy way is to create a table with 1000 rows. Let's call it BigTable. Then you would query for the data you want and join it with the big table, like this:
SELECT MyTable.*
FROM MyTable, BigTable
WHERE MyTable.ID = 5469
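If you don't already have a suitable BigTable, here's a sketch for SQL Server (in Access you'd fill the table by hand or with a bit of VBA, as suggested above):
SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
INTO BigTable
FROM sys.all_columns;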