Select random rows and stop when a specific sum/total is reached - sql

I'm using SQL Server 2012 and I'm trying to do something like this:
SELECT SUM(MILES) from tblName WHERE
mDate >= '03/01/2012' and
mDate <= '03/31/2012'
-- and...
/*
now I want to add here: do until the SUM of Miles
is equal to or greater than 3250, and get the
result rows randomly
*/
So in other words, I want to select random rows from a table whose dates fall within a specified from/to range, and stop when the sum of miles reaches or exceeds the number 3250.

Since you're using SQL Server 2012, here is a much easier approach that doesn't require looping.
DECLARE @tbl TABLE(mDate DATE, Miles INT)
INSERT @tbl VALUES
('20120201', 520), ('20120312', 620),
('20120313', 720), ('20120314', 560),
('20120315', 380), ('20120316', 990),
('20120317', 1020), ('20120412', 520);
;WITH x AS
(
SELECT
mDate,
Miles,
s = SUM(Miles) OVER
(
ORDER BY NEWID() ROWS UNBOUNDED PRECEDING
)
FROM @tbl
WHERE mDate >= '20120301'
AND mDate < '20120401'
)
SELECT
mDate,
Miles,
s
FROM x
WHERE s <= 3250
ORDER BY s;
SQLfiddle demo - hit "Run SQL" multiple times to see random results.

You can do SELECT TOP x ... ORDER BY newid() to get a sum of random rows. The problem lies in determining 'x'. You can't even be sure that the largest value of 'x' (number of rows that match the query) will have a total large enough to meet your requirements without testing that first:
DECLARE @stopAt int
DECLARE @x int
DECLARE @result int
SET @stopAt = 3250
SET @x = 1
SELECT @result = SUM(MILES) from tblName
WHERE
mDate >= '03/01/2012' and
mDate <= '03/31/2012'
IF (@result < @stopAt)
SELECT NULL -- this can't be done
ELSE
BEGIN
WHILE (1=1)
BEGIN
-- sum only the @x randomly chosen rows (a derived table is needed so TOP applies before SUM)
SELECT @result = SUM(t.MILES)
FROM (
    SELECT TOP (@x) MILES FROM tblName
    WHERE
    mDate >= '03/01/2012' and
    mDate <= '03/31/2012'
    ORDER BY newid()
) AS t
IF @result >= @stopAt
BREAK
SET @x = @x + 1
END
SELECT @result
END
Just a note about this - the algorithm starts at 1 and increments until a suitable match is found. A more efficient approach (for larger sets of data) might be a binary-search-style probe that caches the lowest suitable result and returns when the deepest node (or an exact match) is found; a rough sketch of that idea follows.
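This sketch is not code from the answer above; it reuses the tblName / MILES / mDate names from the question, and because it re-samples at every probe the result is only approximate:
DECLARE @stopAt int = 3250
DECLARE @low int = 1, @high int, @mid int, @result int
-- upper bound: every qualifying row
SELECT @high = COUNT(*) FROM tblName
WHERE mDate >= '03/01/2012' AND mDate <= '03/31/2012'
WHILE @low < @high
BEGIN
    SET @mid = (@low + @high) / 2
    -- sum a fresh random sample of @mid rows
    SELECT @result = SUM(t.MILES)
    FROM (
        SELECT TOP (@mid) MILES FROM tblName
        WHERE mDate >= '03/01/2012' AND mDate <= '03/31/2012'
        ORDER BY NEWID()
    ) AS t
    IF @result >= @stopAt
        SET @high = @mid       -- a sample of this size was enough; try smaller
    ELSE
        SET @low = @mid + 1    -- not enough; try larger
END
SELECT @low AS RowsNeeded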

I can't think of a way without a TSQL While... loop. This in combination with the TSQL paging with ROW_NUMBER() should get you there.
http://www.mssqltips.com/sqlservertip/1175/page-through-sql-server-results-with-the-rownumber-function/
In the ROW_NUMBER query, sum the Miles into another MileSum column; then, in the WHILE loop, select the rows that correspond to the ROW_NUMBER query while accumulating the MileSum values into a variable. Terminate when the variable exceeds 3250 (a rough sketch follows).
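This sketch uses the assumed tblName / MILES / mDate names from the question and materializes the randomly numbered rows into a temp table first, so the ordering stays stable while the loop pages through it:
DECLARE @target int = 3250, @total int = 0, @rowNum int = 1, @miles int
SELECT MILES,
       ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn
INTO #paged
FROM tblName
WHERE mDate >= '03/01/2012' AND mDate <= '03/31/2012'
WHILE @total < @target
BEGIN
    SET @miles = NULL
    SELECT @miles = MILES FROM #paged WHERE rn = @rowNum
    IF @miles IS NULL BREAK        -- ran out of rows before reaching the target
    SET @total = @total + @miles
    SET @rowNum = @rowNum + 1
END
SELECT @total AS TotalMiles
DROP TABLE #paged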

Try
SELECT MILES
, RANK() OVER (ORDER BY NEWID()) yourRank
FROM #tblName
WHERE miles>3250
AND mDate >= '03/01/2012'
AND mDate <= '03/31/2012'
ORDER BY yourRank
and then you can add a TOP 5 or whatever number you want; for example, see the query below.
You will get those rows in random order for sure.
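For instance, reusing the exact query above (same assumed table and columns), limited to five random rows:
SELECT TOP (5) MILES
, RANK() OVER (ORDER BY NEWID()) yourRank
FROM #tblName
WHERE miles > 3250
AND mDate >= '03/01/2012'
AND mDate <= '03/31/2012'
ORDER BY yourRank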

Here is some sample code so you can understand the concept.
create table temp(intt int)
insert into temp values
(1),(2),(3),(4),(5),(6),(7),
(8),(9),(10),(11),(12),(13),(14),
(15),(16),(17),(18),(19),(20),(21)
declare @sum int = 0;
declare @prevInt int = 0;
while(@sum < 50)
begin
set @prevInt = (select top(1) intt from temp order by newid());
set @sum = @sum + @prevInt;
end
set @sum = @sum - @prevInt;
select @sum
drop table temp
The reason for this approach is that paging does not return well-spread results unless you have thousands of records: with fewer records the data is grouped into pages, the same page is hit multiple times, and you keep getting the same result.
Also, there can be cases where an empty page is hit, giving 0 as the result (I don't know why an empty page is sometimes hit).

Related

sql loop through a list and insert records into column

I want to loop through a list and insert each item into a column, iterating 1000 times. I am an SQL noob - can anyone help me with this?
What I have so far:
DECLARE @Counter INT
DECLARE @myList varchar(100)
SET @Counter = 0
SET @myList = 'temp,humidity,dewpoint'
WHILE (@Counter <= 1000)
BEGIN
INSERT INTO [DBO].[tbl_var] (VariableNames)
VALUES (@myList)
SET @Counter = @Counter + 1
END
I get this error:
Cannot insert the value NULL into column 'VariableNames', table 'master.DBO.tbl_var'; column does not allow nulls. INSERT fails.
What I expected
VariableNames column
1. temp
2. humidity
3. dewpoint
4. temp
5. humidity
6. dewpoint
and so on until 1000 iterations of list is complete
If you want to INSERT each value 1000 times then use a VALUES table construct and CROSS JOIN to a tally containing 1,000 rows. I use an inline tally in the solution below:
USE YourUserDatabase;
GO
WITH N AS(
SELECT N
FROM (VALUES(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL))N(N)),
Tally AS(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS I
FROM N N1, N N2, N N3) --1,000 rows
INSERT INTO dbo.tbl_var (VariableNames)
SELECT V.VariableName
FROM (VALUES('temp'),
('humidity'),
('dewpoint'),
('temp'),
('humidity'),
('dewpoint'))V(VariableName)
CROSS JOIN Tally T;
I have not tested this, but it should work. You can use the STRING_SPLIT function: https://learn.microsoft.com/en-us/sql/t-sql/functions/string-split-transact-sql?view=sql-server-ver16
DECLARE @Counter INT = 1
DECLARE @myList varchar(100)
SET @Counter = 1
SET @myList = 'temp,humidity,dewpoint'
CREATE TABLE #TEMP (value varchar(255), cardinal Int)
INSERT INTO #TEMP(value, cardinal)
SELECT * FROM STRING_SPLIT(@myList, ',', 1);
WHILE (@Counter <= 1000)
BEGIN
INSERT INTO [DBO].[tbl_var] (VariableNames)
SELECT value FROM #TEMP WHERE cardinal = ((@Counter - 1) % 3) + 1 -- cycle through the 3-item list
SET @Counter = @Counter + 1
END

SQL Server - loop through table and update based on count

I have a SQL Server database. I need to loop through a table to get the count of each value in the 'RevID' column. Each value should only appear in the table a certain number of times - for example, 125 times. If the count of a value is greater or less than 125, I need to update the column so that every RevID value (there are over 25 different values) ends up within the same range around 125 (being a few numbers off is OK).
For example, if the count of RevID = 'A2' is 45 and the count of RevID = 'B2' is 165, then I need to update RevID so the 45 count increases and the 165 count decreases until both are within the 125 range.
This is what I have so far:
DECLARE @i INT = 1,
@RevCnt INT = SELECT RevId, COUNT(RevId) FROM MyTable group by RevId
WHILE(@RevCnt >= 50)
BEGIN
UPDATE MyTable
SET RevID= (SELECT COUNT(RevID) FROM MyTable)
WHERE RevID < 50)
@i = @i + 1
END
I have also played around with a cursor and an INSTEAD OF trigger. Any idea on how to achieve this? Thanks for any input.
Okay, I came back to this because I found it interesting, even though there are clearly some business rules/discussion that you and I and others are not seeing. Anyway, if you want to distribute the values evenly and arbitrarily, there are a few ways you could do it, e.g. by building recursive Common Table Expressions [CTEs] or by building temp tables. Here is the way I decided to try. I did use one temp table, because SQL was throwing in a little inconsistency with the main logic table as a CTE about every tenth run, and the temp table seems to have cleared that up. This will spread RevId evenly, arbitrarily and randomly assigning any remainder (# of records / # of RevIds) to one of the RevIds. The script also doesn't rely on having a unique ID or anything; it works dynamically over row numbers it creates. Just subtract out the test data and you have what you more than likely want, though rebuilding the table/values would probably be easier.
--Build Some Test Data
DECLARE @Table AS TABLE (RevId VARCHAR(10))
DECLARE @C AS INT = 1
WHILE @C <= 400
BEGIN
IF @C <= 200
BEGIN
INSERT INTO @Table (RevId) VALUES ('A1')
END
IF @C <= 170
BEGIN
INSERT INTO @Table (RevId) VALUES ('B2')
END
IF @C <= 100
BEGIN
INSERT INTO @Table (RevId) VALUES ('C3')
END
IF @C <= 400
BEGIN
INSERT INTO @Table (RevId) VALUES ('D4')
END
IF @C <= 1
BEGIN
INSERT INTO @Table (RevId) VALUES ('E5')
END
SET @C = @C + 1
END
--save starting counts of test data to temp table to compare with later
IF OBJECT_ID('tempdb..#StartingCounts') IS NOT NULL
BEGIN
DROP TABLE #StartingCounts
END
SELECT
RevId
,COUNT(*) as Occurences
INTO #StartingCounts
FROM
@Table
GROUP BY
RevId
ORDER BY
RevId
/************************ This is the main method **********************************/
--clear temp table that is the main processing logic
IF OBJECT_ID('tempdb..#RowNumsToChange') IS NOT NULL
BEGIN
DROP TABLE #RowNumsToChange
END
--figure out how many records there are and how many there should be for each RevId
;WITH cteTargetNumbers AS (
SELECT
RevId
--,COUNT(*) as RevIdCount
--,SUM(COUNT(*)) OVER (PARTITION BY 1) / COUNT(*) OVER (PARTITION BY 1) +
--CASE
--WHEN ROW_NUMBER() OVER (PARTITION BY 1 ORDER BY NEWID()) <=
--SUM(COUNT(*)) OVER (PARTITION BY 1) % COUNT(*) OVER (PARTITION BY 1)
--THEN 1
--ELSE 0
--END as TargetNumOfRecords
,SUM(COUNT(*)) OVER (PARTITION BY 1) / COUNT(*) OVER (PARTITION BY 1) +
CASE
WHEN ROW_NUMBER() OVER (PARTITION BY 1 ORDER BY NEWID()) <=
SUM(COUNT(*)) OVER (PARTITION BY 1) % COUNT(*) OVER (PARTITION BY 1)
THEN 1
ELSE 0
END - COUNT(*) AS NumRecordsToUpdate
FROM
@Table
GROUP BY
RevId
)
, cteEndRowNumsToChange AS (
SELECT *
,SUM(CASE WHEN NumRecordsToUpdate > 1 THEN NumRecordsToUpdate ELSE 0 END)
OVER (PARTITION BY 1 ORDER BY RevId) AS ChangeEndRowNum
FROM
cteTargetNumbers
)
SELECT
*
,LAG(ChangeEndRowNum,1,0) OVER (PARTITION BY 1 ORDER BY RevId) as ChangeStartRowNum
INTO #RowNumsToChange
FROM
cteEndRowNumsToChange
;WITH cteOriginalTableRowNum AS (
SELECT
RevId
,ROW_NUMBER() OVER (PARTITION BY RevId ORDER BY (SELECT 0)) as RowNumByRevId
FROM
@Table t
)
, cteRecordsAllowedToChange AS (
SELECT
o.RevId
,o.RowNumByRevId
,ROW_NUMBER() OVER (PARTITION BY 1 ORDER BY (SELECT 0)) as ChangeRowNum
FROM
cteOriginalTableRowNum o
INNER JOIN #RowNumsToChange t
ON o.RevId = t.RevId
AND t.NumRecordsToUpdate < 0
AND o.RowNumByRevId <= ABS(t.NumRecordsToUpdate)
)
UPDATE o
SET RevId = u.RevId
FROM
cteOriginalTableRowNum o
INNER JOIN cteRecordsAllowedToChange c
ON o.RevId = c.RevId
AND o.RowNumByRevId = c.RowNumByRevId
INNER JOIN #RowNumsToChange u
ON c.ChangeRowNum > u.ChangeStartRowNum
AND c.ChangeRowNum <= u.ChangeEndRowNum
AND u.NumRecordsToUpdate > 0
IF OBJECT_ID('tempdb..#RowNumsToChange') IS NOT NULL
BEGIN
DROP TABLE #RowNumsToChange
END
/***************************** End of Main Method *******************************/
-- Compare the results and clean up
;WITH ctePostUpdateResults AS (
SELECT
RevId
,COUNT(*) as AfterChangeOccurences
FROM
@Table
GROUP BY
RevId
)
SELECT *
FROM
#StartingCounts s
INNER JOIN ctePostUpdateResults r
ON s.RevId = r.RevId
ORDER BY
s.RevId
IF OBJECT_ID('tempdb..#StartingCounts') IS NOT NULL
BEGIN
DROP TABLE #StartingCounts
END
Since you've given no rules for how you'd like the balance to operate we're left to speculate. Here's an approach that would find the most overrepresented value and then find an underrepresented value that can take on the entire overage.
I have no idea how optimal this is and it will probably run in an infinite loop without more logic.
declare @balance int = 125;
declare @cnt_over int;
declare @cnt_under int;
declare @revID_overrepresented varchar(32);
declare @revID_underrepresented varchar(32);
declare @rowcount int = 1;
while @rowcount > 0
begin
select top 1 @revID_overrepresented = RevID, @cnt_over = count(*)
from T
group by RevID
having count(*) > @balance
order by count(*) desc
select top 1 @revID_underrepresented = RevID, @cnt_under = count(*)
from T
group by RevID
having count(*) < @balance - @cnt_over
order by count(*) desc
update top (@cnt_over - @balance) T
set RevId = @revID_underrepresented
where RevId = @revID_overrepresented;
set @rowcount = @@rowcount;
end
The problem is I don't even know what you mean by balance... You say it needs to be evenly represented, but it seems like you want it to be 125. 125 is not "even", it is just 125.
I can't tell what you are trying to do, but I'm guessing this is not really an SQL problem. But you can use SQL to help. Here is some helpful SQL for you. You can use this in your language of choice to solve the problem.
Find the rev values and their counts:
SELECT RevID, COUNT(*)
FROM MyTable
GROUP BY RevID
Update @X rows (with RevID of value @RevID) to a new value @NewValue:
UPDATE TOP (@X) MyTable
SET RevID = @NewValue
WHERE RevID = @RevID
Using these two queries you should be able to apply your business rules (which you never specified) in a loop or whatever to change the data.

Loop through sql result set and remove [n] duplicates

I've got a SQL Server db with quite a few dupes in it. Removing the dupes manually is just not going to be fun, so I was wondering if there is any sort of sql programming or scripting I can do to automate it.
Below is my query that returns the ID and the Code of the duplicates.
select a.ID, a.Code
from Table1 a
inner join (
SELECT Code
FROM Table1 GROUP BY Code HAVING COUNT(Code)>1)
x on x.Code= a.Code
I'll get a return like this, for example:
5163 51727
5164 51727
5165 51727
5166 51728
5167 51728
5168 51728
This snippet shows three returns for each ID/Code (so a primary "good" record and two dupes). However this isn't always the case. There can be up to [n] dupes, although 2-3 seems to be the norm.
I just want to somehow loop through this result set and delete everything but one record. THE RECORDS TO DELETE ARE ARBITRARY, as any of them can be "kept".
You can use row_number to drive your delete.
ie
CREATE TABLE #table1
(id INT,
code int
);
WITH cte AS
(select a.ID, a.Code, ROW_NUMBER() OVER(PARTITION BY Code ORDER BY ID) AS rn
from #Table1 a
)
DELETE x
FROM #table1 x
JOIN cte ON x.id = cte.id
WHERE cte.rn > 1
But...
If you are going to be doing a lot of deletes from a very large table, you might be better off selecting the rows you need into a temp table, then truncating your table and re-inserting those rows (see the sketch below).
It keeps the transaction log from getting hammered, stops your clustered index from getting fragmented, and should be quicker too!
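A rough, hedged sketch of that keep/truncate/re-insert idea, with the assumed Table1 / ID / Code names from the question (wrap it in a transaction, and mind foreign keys and identity columns before using something like this for real):
SELECT ID, Code
INTO #keepers
FROM (
    SELECT ID, Code,
           ROW_NUMBER() OVER (PARTITION BY Code ORDER BY ID) AS rn
    FROM Table1
) AS x
WHERE rn = 1              -- one arbitrary survivor per Code
TRUNCATE TABLE Table1     -- fast and minimally logged
INSERT INTO Table1 (ID, Code)
SELECT ID, Code FROM #keepers
DROP TABLE #keepers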
It is actually very simple:
DELETE FROM Table1
WHERE ID NOT IN
(SELECT MAX(ID)
FROM Table1
GROUP BY CODE)
Self-join solution with a performance test vs. the CTE.
create table codes(
id int IDENTITY(1,1) NOT NULL,
code int null,
CONSTRAINT [PK_codes_id] PRIMARY KEY CLUSTERED
(
id ASC
))
declare @counter int, @code int
set @counter = 1
set @code = 1
while (@counter <= 1000000)
begin
print ABS(Checksum(NewID()) % 1000)
insert into codes(code) select ABS(Checksum(NewID()) % 1000)
set @counter = @counter + 1
end
GO
set statistics time on;
delete a
from codes a left join(
select MIN(id) as id from codes
group by code) b
on a.id = b.id
where b.id is null
set statistics time off;
--set statistics time on;
-- WITH cte AS
-- (select a.id, a.code, ROW_NUMBER() OVER(PARTITION by code ORDER BY id) AS rn
-- from codes a
-- )
-- delete x
-- FROM codes x
-- JOIN cte ON x.id = cte.id
-- WHERE cte.rn > 1
--set statistics time off;
Performance test results:
With Join:
SQL Server Execution Times:
CPU time = 3198 ms, elapsed time = 3200 ms.
(999000 row(s) affected)
With CTE:
SQL Server Execution Times:
CPU time = 4197 ms, elapsed time = 4229 ms.
(999000 row(s) affected)
It's basically done like this:
WITH CTE_Dup AS
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY SalesOrderno, ItemNo ORDER BY SalesOrderno, ItemNo)
AS ROW_NO
from dbo.SalesOrderDetails
)
DELETE FROM CTE_Dup WHERE ROW_NO > 1;
NOTE: the PARTITION BY must include all of the columns that define a duplicate!
Here is another example:
CREATE TABLE #Table (C1 INT,C2 VARCHAR(10))
INSERT INTO #Table VALUES (1,'SQL Server')
INSERT INTO #Table VALUES (1,'SQL Server')
INSERT INTO #Table VALUES (2,'Oracle')
SELECT * FROM #Table
;WITH Delete_Duplicate_Row_cte
AS (SELECT ROW_NUMBER() OVER (PARTITION BY C1, C2 ORDER BY C1, C2) ROW_NUM, *
FROM #Table )
DELETE FROM Delete_Duplicate_Row_cte WHERE ROW_NUM > 1
SELECT * FROM #Table

NTILE alternative for non-uniformely distributed data sets

I have a data set and want to display it, but it can be very large (thousands of points), so I want to filter it down. (The original question showed a chart of the raw output for 1000+ points.)
Currently I use NTILE to get an approximation, but it doesn't work as expected if the points are not distributed uniformly. (The question showed the resulting chart with NTILE parameter 100.)
How can I avoid this behaviour? The SQL stored procedure is below:
ALTER PROCEDURE [dbo].[usp_GetSystemHealthCheckData]
@DateFrom datetime,
@DateTo datetime,
@EstimatedPointCount int
with recompile
AS
BEGIN
SET NOCOUNT ON;
set arithabort on
if @DateFrom IS NULL
RAISERROR ('@DateFrom cannot be NULL', 16, 1)
if @DateTo IS NULL
RAISERROR ('@DateTo cannot be NULL', 16, 1)
if @EstimatedPointCount IS NULL
RAISERROR ('@EstimatedPointCount cannot be NULL', 16, 1)
;With T as
(
SELECT *, GroupId = NTILE(@EstimatedPointCount) over (order by GeneratedOnUtc)
FROM SystemHealthCheckData
WHERE GeneratedOnUtc between #DateFrom AND #DateTo
)
SELECT CpuPercentPayload = AVG(CpuPercentPayload),
FreeRamMb = AVG(FreeRamMb),
FreeDriveMb = AVG(FreeDriveMb),
GeneratedOnUtc = CAST(AVG(CAST(GeneratedOnUtc AS DECIMAL( 18, 6))) AS DATETIME)
FROM T
GROUP BY GroupId
END
EDIT: new approach
You might split your load with NTILE and then calculate an average for each group. I split my set into 4 groups, which lets the query come back with 4 average values. The number of groups could be calculated from the number of points you have, or could be fixed.
Something like this:
DECLARE @tbl TABLE(id INT IDENTITY, nmbr FLOAT);
INSERT INTO @tbl VALUES(5),(4.5),(4),(3.5),(3),(2.5),(2),(1.5),(1),(1.5),(1),(0.5),(0),(13),(2),(17),(5),(22),(24),(2),(3),(11);
SELECT tbl2.*
,AVG(nmbr) OVER(PARTITION BY tbl2.tile)
FROM
(
SELECT tbl.*
,NTILE(4) OVER(ORDER BY id) AS tile
FROM @tbl AS tbl
)AS tbl2
If you want it reduced to the group values only you could try this
SELECT AVG(nmbr),tbl2.tile
FROM
(
SELECT tbl.*
,NTILE(4) OVER(ORDER BY id) AS tile
FROM @tbl AS tbl
)AS tbl2
GROUP BY tbl2.tile
--old text
You may also want to think about a sliding average... In this example I tried to rebuild your values (a long linear decline with wild jumps at the end). You can set the @pre and @post variables to control the degree of flattening.
In short: an average is calculated for each element and its direct neighbours.
Be aware that you must add an ORDER BY to avoid random results...
DECLARE @tbl TABLE(id INT IDENTITY, nmbr FLOAT);
INSERT INTO @tbl VALUES(5),(4.5),(4),(3.5),(3),(2.5),(2),(1.5),(1),(1.5),(1),(0.5),(0),(13),(2),(17),(5),(22),(24),(2),(3),(11);
DECLARE @pre INT=3;
DECLARE @post INT=3;
SELECT tbl.*
,AvgBorders.*
,AvgSums.*
,AvgSlide.*
FROM @tbl AS tbl
CROSS APPLY
(
SELECT tbl.id - @pre AS AvgStart
,tbl.id + @post AS AvgEnd
) AS AvgBorders
CROSS APPLY
(
SELECT COUNT(nmbr) AS CountNmbr
,SUM(nmbr) AS SumNmbr
FROM @tbl AS tbl
WHERE tbl.id BETWEEN AvgStart AND AvgEnd
) as AvgSums
CROSS APPLY
(
select AvgSums.SumNmbr / AvgSums.CountNmbr As AvgValue
) As AvgSlide
;

Selecting rows that don't exist physically in the database

I've totally rewritten my question because people were taking the simplicity of the previous one too literally.
The aim:
INSERT INTO X
SELECT TOP 23452345 NEWID()
This query should insert 23452345 GUIDs into the "X" table; 23452345 here stands for any number entered by the user and stored in the database.
So the problem is that inserting rows into a database using an
INSERT INTO ... SELECT ...
statement requires you to already have the required number of rows in the database.
Naturally you can emulate the existence of rows by cross joining temporary data, but this (in my opinion) creates more results than needed, and in some extreme situations might fail for unpredictable reasons. I need to be sure that if the user enters an extremely huge number, like 2^32 or even bigger, the system will work and behave normally without side effects such as extreme memory/time consumption.
In all fairness I derived the idea from this site.
;WITH cte AS
(
SELECT 1 x
UNION ALL
SELECT x + 1
FROM cte
WHERE x < 100
)
SELECT NEWID()
FROM cte
EDIT:
The general method we're seeing is to select from a table that has the desired number of rows. It's hackish, but you can create a table, insert the desired number of records, and select from it.
create table #num
(
num int
)
declare @i int
set @i = 1
while (@i <= 77777)
begin
insert into #num values (@i)
set @i = @i + 1
end
select NEWID() from #num
drop table #num
Of course creating a Number table is the best approach and will come in handy. You should definitely have one at your disposal. If you need something as a one-off just join to a known table. I usually use a system table such as spt_values:
declare @result table (id uniqueidentifier)
declare @sDate datetime
set @sDate = getdate();
;with num (n)
as ( select top(777777) row_number() over(order by t1.number) as N
from master..spt_values t1
cross join master..spt_values t2
)
insert into @result(id)
select newid()
from num;
select datediff(ms, @sDate, getdate()) [elapsed]
I'd create an integers table and use it. This type of table comes in handy many situations.
CREATE TABLE dbo.Integers
(
i INT IDENTITY(1,1) PRIMARY KEY CLUSTERED
)
WHILE COALESCE(SCOPE_IDENTITY(), 0) <= 100000 /* or some other large value */
BEGIN
INSERT dbo.Integers DEFAULT VALUES
END
Then all you need to do it:
SELECT NEWID()
FROM Integers
WHERE i <= 77777
Try this:
with
L0 as (select 1 as C union all select 1) --2 rows
,L1 as (select 1 as C from L0 as A, L0 as B) --4 rows
,L2 as (select 1 as C from L1 as A, L1 as B) --16 rows
,L3 as (select 1 as C from L2 as A, L2 as B) --256 rows
select top 100 newid() from L3
SELECT TOP 100 NEWID() from sys.all_columns
Or any other data source that has a large number of records. You can also build your own table for 'counting' functionality and use it in lieu of WHILE loops; a sketch follows below.
Tally tables: http://www.sqlservercentral.com/articles/T-SQL/62867
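This sketch is only illustrative: the dbo.Numbers name is made up, and it assumes the cross-joined catalog view returns at least 100,000 rows:
-- build a numbers table in one set-based statement instead of a WHILE loop
SELECT TOP (100000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
INTO dbo.Numbers
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b
-- then the original task becomes a single query
SELECT NEWID()
FROM dbo.Numbers
WHERE n <= 77777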