Uniformly distributed random values - SQL

I wonder if someone knows how I can generate random values in SQL Server that are uniformly distributed within a range. This is what I did:
SELECT ID, AlgorithmType, AlgorithmID
FROM TEvaluateAlgorithm
I want AlgorithmID to take values from 0 to 15, and it has to be uniformly distributed:
UPDATE TEA SET TEA.AlgorithmID = FLOOR(RAND(CONVERT(VARBINARY, NEWID()))*(16))
FROM TEvaluateAlgorithm TEA
I do not know what happens with the random function, but it is not distributing values uniformly between 0 and 15; the counts are not the same.
For example, the counts for 0 to 9 are noticeably higher than those for 10 to 15.
Thanks in advance!
EDITED:
Here is my data; you can see the difference...
AlgorithmID COUNT(*)
0 22254
1 22651
2 22806
3 22736
4 22670
5 22368
6 22690
7 22736
8 22646
9 22536
10 14479
11 14787
12 14553
13 14546
14 14574
15 14722

rand() doesn't do a good job with this. Because you want integers, I would suggest the following:
select abs(checksum(newid()) % 16)
I just checked this using:
select val, count(*)
from (select abs(checksum(newid()) % 16) as val
      from master..spt_values
     ) t
group by val
order by val;
and the distribution looks reasonable.
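Applied to the table in the question, the update would look something like this (a sketch reusing the question's table and column names):
UPDATE TEvaluateAlgorithm
SET AlgorithmID = ABS(CHECKSUM(NEWID()) % 16);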

Here's a quick proof of concept.
Set @Loops to something big enough to make the statistics meaningful. 50k seems like a decent starting point.
Set @MinValue to the lowest integer in your set and set @TotalValues to how many integers you want in your set. 0 and 16 get you the 16 values [0-15], as noted in the question.
We're going to use a random function to cram 50k outputs into a temp table, then run some stats on it...
DECLARE @MinValue int
DECLARE @TotalValues int
SET @MinValue = 0
SET @TotalValues = 16
DECLARE @LoopCounter bigint
SET @LoopCounter = 0
DECLARE @Loops bigint
SET @Loops = 50000

CREATE TABLE #RandomValues
(
    RandValue int
)

WHILE @LoopCounter < @Loops
BEGIN
    INSERT INTO #RandomValues (RandValue) VALUES (FLOOR(RAND()*(@TotalValues - @MinValue) + @MinValue))
    --you can plug any other randomizing formula you want to test into the right side of the equation above
    SET @LoopCounter = @LoopCounter + 1
END
--raw data query
SELECT
    RandValue AS [Value],
    COUNT(RandValue) AS [Occurrences],
    ((CONVERT(real, COUNT(RandValue))) / CONVERT(real, @Loops)) * 100.0 AS [Percentage]
FROM
    #RandomValues
GROUP BY
    RandValue
ORDER BY
    RandValue ASC

--stats on your random query
SELECT
    MIN([Percentage]) AS [Min %],
    MAX([Percentage]) AS [Max %],
    STDEV([Percentage]) AS [Standard Deviation]
FROM
(
    SELECT
        RandValue AS [Value],
        COUNT(RandValue) AS [Occurrences],
        ((CONVERT(real, COUNT(RandValue))) / CONVERT(real, @Loops)) * 100.0 AS [Percentage]
    FROM
        #RandomValues
    GROUP BY
        RandValue
    --ORDER BY
    --    RandValue ASC
) DerivedRawData

DROP TABLE #RandomValues
Note that you can plug any other randomizing formula into the right side of the INSERT statement within the WHILE loop and re-run to see if you like the results better. "Evenly distributed" is somewhat subjective, but the standard deviation result is quantifiable, and you can decide whether it is acceptable or not.
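For instance, to check the CHECKSUM(NEWID()) expression from the earlier answer with this harness, the INSERT inside the loop could be swapped for something like this (a sketch reusing the @MinValue/@TotalValues variables declared above):
    --sketch: same harness, different randomizing formula
    INSERT INTO #RandomValues (RandValue)
    VALUES (ABS(CHECKSUM(NEWID()) % @TotalValues) + @MinValue)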

Related

Divide the range between two numbers without using parts

I have to divide the range between two numbers based on an increment provided by the user. I found an answer which is based on parts.
I cannot use parts, as it affects the number of rows in the result; instead I have to pass an increment.
Here is my query using parts.
declare @min numeric(18,0)
declare @max numeric(18,0)
declare @parts numeric(18,0)

select @min = 100,
       @max = 204,
       @parts = 10

declare @increment int = (@max - @min) / @parts

while @max >= @min
begin
    declare @newMin numeric(18,0) = @min + @increment
    print convert(varchar, @min) + ' - ' + convert(varchar, @newMin)
    select @min = @newMin + 1
end
Expected output. As you can see in the query, my input is min and max with parts, based on which the increments are calculated, but I need to fix the increment at something like 10 or 100.
From To
--------
100 110
111 121
122 132
133 143
144 154
155 165
166 176
177 187
188 198
199 204
There are two answers here: one based on the original version of the question (where To could be larger than @max), where your goal is that @parts is the number of values you want in each bucket (+1). The other assumes you want to split the range into that many buckets, with the last bucket shrunk if its upper value would be larger than @max. Both use a Tally function, which I include the definition of:
CREATE FUNCTION [fn].[Tally] (@End bigint, @StartAtOne bit)
RETURNS table
AS RETURN
    WITH N AS(
        SELECT N
        FROM (VALUES(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL))N(N)),
    Tally AS(
        SELECT 0 AS I
        WHERE @StartAtOne = 0
        UNION ALL
        SELECT TOP (@End)
            ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS I
        FROM N N1, N N2, N N3, N N4, N N5, N N6, N N7, N N8)
    SELECT I
    FROM Tally;
GO
DECLARE @min numeric(18,0) = 100,
        @max numeric(18,0) = 304,
        @parts numeric(18,0) = 10;

SELECT @min + (T.I * (@parts+1)),
       @min + ((T.I+1) * (@parts+1)) - 1
FROM fn.Tally((@max - @min)/(@parts+1), 0) T;

SELECT @min + (T.I * CEILING((@max - @min)/@parts)),
       CASE WHEN @min + ((T.I+1) * CEILING((@max - @min)/@parts)) - 1 > @max THEN @max ELSE @min + ((T.I+1) * CEILING((@max - @min)/@parts)) - 1 END
FROM fn.Tally(@parts-1, 0) T
db<>fiddle
The question is unclear and doesn't describe the actual problem that needs solving. The code shows a way to partition a continuous range of numbers, 100-204, into N partitions, in this case specified by @parts. It's not very efficient, but it works.
Partitioning a range into parts is a popular SQL puzzle, so there are a lot of articles over the past 30-40 years that show how to do it for different databases, using different features and trying to get the best performance. In its simple form there are two ways to partition:
By part count, which is what you have
By part size
If you don't want to partition by part count, you probably want by part size. Doing this isn't complicated either, and doesn't require slow loops.
Assuming we have a Tally table named Numbers, with all numbers up to e.g. 1M, the query to partition a range would be:
declare @start int = 100
declare @end int = 204
declare @size int = 25

;with parts as (
    select Number, (Number - @start)/@size as part_id
    from Numbers
    where Number between @start and @end
)
select part_id, min(Number) as [Start], max(Number) as [End]
from parts
group by part_id
----------------------
part_id Start End
0 100 124
1 125 149
2 150 174
3 175 199
4 200 204
First, each Number (offset by @start) is divided by the part size using integer division, which determines the part it belongs to. The results are then grouped by part_id, and the range limits are the minimum and maximum Number in each group.
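For example, with @start = 100 and @size = 25, the number 150 falls into part (150 - 100) / 25 = 2, whose smallest and largest members are 150 and 174, matching the third row of the output above.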
Creating a Numbers table is cheap. I used this script to generate a table with 1M numbers, which takes only about 11 MB:
DECLARE @UpperBound INT = 1000000;

;WITH cteN(Number) AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY s1.[object_id]) - 1
    FROM sys.all_columns AS s1
    CROSS JOIN sys.all_columns AS s2
)
SELECT [Number] INTO dbo.Numbers
FROM cteN WHERE [Number] <= @UpperBound;

CREATE UNIQUE CLUSTERED INDEX CIX_Number ON dbo.Numbers([Number])
WITH
(
    FILLFACTOR = 100,
    DATA_COMPRESSION = ROW
);
Data compression is available in all editions since SQL Server 2016 SP1.
The same technique can be used to partition a range into N parts, using the NTILE function this time:
declare @parts int = 10

;with parts as (
    select Number, NTILE(@parts) over (order by Number) as part_id
    from Numbers
    where Number between @start and @end
)
select part_id, min(Number) as [Start], max(Number) as [End]
from parts
group by part_id
In real business cases, NTILE is used to partition results into "buckets", as in the sketch below.
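A purely illustrative sketch of that kind of bucketing (the table and column names here are hypothetical, not from the question):
--hypothetical example: assign each customer to a spend quartile (1 = top spenders)
SELECT CustomerID,
       TotalSpend,
       NTILE(4) OVER (ORDER BY TotalSpend DESC) AS SpendQuartile
FROM dbo.CustomerTotals;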

Query with row-by-row calculation for running total

I have a problem where jobs become 'due' at the start of a week and each week there are a certain number of 'slots' available to complete any outstanding jobs. If there are not enough slots then the jobs roll over to the next week.
My initial table looks like this:
Week        Slots   Due
-------------------------
23/8/2021   0       1
30/8/2021   2       3
6/9/2021    5       2
13/9/2021   1       4
I want to maintain a running total of the number of 'due' jobs at the end of each week.
Each week the number due would be added to the running total from last week, then the number of slots this week would be subtracted. If there are enough slots to do all the jobs required then the running total will be 0 (never negative).
As an example, the below shows how I would achieve this in JavaScript:
var Total = 0;
data.forEach(function(d){
    Total += d.Due;
    Total -= d.Slots;
    Total = Total > 0 ? Total : 0;
    d.Total = Total;
});
The result would be as below:
Week        Slots   Due   Total
--------------------------------
23/8/2021   0       1     1
30/8/2021   2       3     2
6/9/2021    5       2     0
13/9/2021   1       4     3
Is it possible for me to achieve this in SQL (specifically SQL Server 2012)?
I have tried various forms of sum(xxx) over (order by yyy)
Closest I managed was:
sum(Due) over (order by Week) - sum(Slots) over (order by Week) as Total
This provided a running total, but will provide a negative total when there are excess slots.
Is the only way to do this with a cursor? If so - any suggestions?
Thanks.
Possible answer(s) to my own question based on suggestions in comments.
Thorsten Kettner suggested a recursive query:
with cte as (
select [Week], [Due], [Slots]
,case when Due > Slots then Due - Slots else 0 end as [Total]
from [Data]
where [Week] = (select top 1 [Week] from [Data] order by [Week])
union all
select e.[Week], e.[Due], e.[Slots]
, case when cte.Total + e.Due - e.Slots > 0 then cte.Total + e.Due - e.Slots else 0 end as [Total]
from [Data] e
inner join cte on cte.[Week] = dateadd(day,-7,e.[Week])
)
select * from cte
OPTION (MAXRECURSION 200)
Thorsten - is this what you were suggesting? (If you have any improvements, please post as an answer so I can accept it!)
Presumably I have to ensure that MAXRECURSION is set to something higher than the number of rows I will be dealing with?
I am a little bit nervous about the join on dateadd(day,-7,e.[Week]). Would I be better doing something with Row_Number() to get the previous record? I may want to use something other than weeks, or weeks may be missing?
George Menoutis suggested a 'while' query and I was looking for ways to implement that when I came across this post: https://stackoverflow.com/a/35471328/1372848
This suggested that a cursor may not be all that bad compared to a while?
This is the cursor based version I came up with:
SET NOCOUNT ON;

DECLARE @Week Date,
        @Due Int,
        @Slots Int,
        @Total Int = 0;

DECLARE @Output TABLE ([Week] Date NOT NULL, Due Int NOT NULL, Slots Int NOT NULL, Total Int);

DECLARE crs CURSOR STATIC LOCAL READ_ONLY FORWARD_ONLY
FOR SELECT [Week], Due, Slots
    FROM [Data]
    ORDER BY [Week] ASC;

OPEN crs;

FETCH NEXT
FROM crs
INTO @Week, @Due, @Slots;

WHILE (@@FETCH_STATUS = 0)
BEGIN
    SET @Total = @Total + @Due;
    SET @Total = @Total - @Slots;
    SET @Total = IIF(@Total > 0, @Total, 0);

    INSERT INTO @Output ([Week], [Due], [Slots], [Total])
    VALUES (@Week, @Due, @Slots, @Total);

    FETCH NEXT
    FROM crs
    INTO @Week, @Due, @Slots;
END;

CLOSE crs;
DEALLOCATE crs;

SELECT *
FROM @Output;
Both of these seem to work as intended. The recursive query feels better (cursors = bad, etc.), but is it designed to be used this way, with a recursion for every input row and therefore potentially a very high number of recursions?
Many thanks for everyone's input :-)
Improvement on previous answer following input from Thorsten
with numbered as (
select *, ROW_NUMBER() OVER (ORDER BY [Week]) as RN
from [Data]
)
,cte as (
select [Week], [Due], [Slots], [RN]
,case when Due > Slots then Due - Slots else 0 end as [Total]
from numbered
where RN = 1
union all
select e.[Week], e.[Due], e.[Slots], e.[RN]
, case when cte.Total + e.Due - e.Slots > 0 then cte.Total + e.Due - e.Slots else 0 end as [Total]
from numbered e
inner join cte on cte.[RN] = e.[RN] - 1
)
select * from cte
OPTION (MAXRECURSION 0)
Many thanks Thorsten for all your help.

Create evenly spaced out sequence in SQL

So this should be fairly simple, and I'm sure there's an embarrassingly easily solution I'm missing, but here goes:
I want to create a grid of numbers based on two numeric variables.
More specifically, I want to select the 5th and 95th percentile of each variable, then cut up the difference between those two values into 100 parts, and then group by those.
So basically what I need is in pseudocode
(5th percentile)+(95th percentile-5th percentile)/100*[all numbers from 0 to 100]
I can pick out the 5th and 95th percentile with the following query:
SELECT MIN(subq.lat) AS latitude, percentile
FROM (SELECT ROUND(latitude, 2) AS lat,
             NTILE(100) OVER (ORDER BY latitude DESC) AS percentile
      FROM table) AS subq
WHERE percentile IN (5, 95)
GROUP BY percentile
And I can create a list of numbers from 0 to 100 as well.
But how to combine those two is something that's a little beyond me.
Help would be much appreciated.
I'm not entirely sure I follow what you're after, but it could be as simple as looping through 1-100, performing your calculation for each value and inserting the results into a results table:
CREATE TABLE #Results (Counter_ INT, Calc_Value FLOAT)
GO
DECLARE @intFlag INT
SET @intFlag = 1
WHILE (@intFlag <= 100)
BEGIN
    --Do Stuff
    INSERT INTO #Results
    SELECT Counter_ = @intFlag
          ,Calc_Value = (calculation logic)/@intFlag
    SET @intFlag = @intFlag + 1
END
GO
The 'Do Stuff' portion gets executed for each value 1-100; the (calculation logic) would obviously need to be replaced with whatever logic you use. If those values are constant across 1-100, you could set them as variables so they don't have to run 100 times. Roughly:
CREATE TABLE #Results (Counter_ INT, Calc_Value FLOAT)
GO
DECLARE @intFlag INT, @Percentile_Value FLOAT = (Calculation Logic)
SET @intFlag = 1
WHILE (@intFlag <= 100)
BEGIN
    --Do Stuff
    INSERT INTO #Results
    SELECT Counter_ = @intFlag
          ,Calc_Value = @Percentile_Value/@intFlag
    SET @intFlag = @intFlag + 1
END
GO
You can do what you want to do with window functions. Basically, do the percentile calculation with row_number() and the total count.
Here is an example:
SELECT lat,
       ((seqnum - 0.05 * cnt) / (0.95 * cnt - 0.05 * cnt)) * 100 as NewPercentile
FROM (SELECT round(latitude, 2) as lat,
             row_number() over (order by latitude) as seqnum,
             count(*) over () as cnt
      FROM table
     ) AS subq
WHERE seqnum between 0.05 * cnt and 0.95 * cnt
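Neither answer spells out the final grid from the question's pseudocode, so here is a minimal sketch of that last step, assuming the 5th and 95th percentile values have already been fetched into variables (the placeholder values below are illustrative only):
DECLARE @p5 float = 10.0, @p95 float = 60.0;  --placeholders; fill these from your percentile query

--101 evenly spaced grid points: p5 + (p95 - p5) / 100 * n, for n = 0..100
;WITH nums AS (
    SELECT TOP (101) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
    FROM master..spt_values
)
SELECT n, @p5 + (@p95 - @p5) / 100.0 * n AS grid_value
FROM nums
ORDER BY n;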

Inventory Price Calculation in SQL

I'm on SQL Server 2005 and I'm trying to convert this cursor into something that isn't a cursor, to determine whether this is the most efficient way to do this.
--Create cursor to determine total cost
DECLARE CostCursor CURSOR FAST_FORWARD
    FOR SELECT ReceiptQty
              ,Price
        FROM #temp_calculate
        ORDER BY UpdateDate DESC

OPEN CostCursor
FETCH NEXT FROM CostCursor INTO @ReceiptQty, @Price

WHILE @@FETCH_STATUS = 0
BEGIN
    IF @OnHandQty >= @ReceiptQty
    BEGIN
        --SELECT @ReceiptQty, @Price, 1, @OnHandQty
        SET @Cost = @ReceiptQty * @Price
        SET @OnHandQty = @OnHandQty - @ReceiptQty
        SET @TotalCost = @TotalCost + @Cost
    END
    ELSE
    BEGIN
        IF @OnHandQty < @ReceiptQty
        BEGIN
            --SELECT @ReceiptQty, @Price, 2, @OnHandQty
            SET @Cost = @OnHandQty * @Price
            SET @OnHandQty = 0
            SET @TotalCost = @TotalCost + @Cost
            BREAK;
        END
    END
    FETCH NEXT FROM CostCursor INTO @ReceiptQty, @Price
END

CLOSE CostCursor
DEALLOCATE CostCursor
The system needs to go through and use the newest received inventory and its price to determine what was paid for the on-hand quantity.
Ex. 1st iteration: @OnHandQty = 8, ReceivedQty = 5, Price = 1, UpdateDate = 1/20. Results: @OnHandQty = 3, @TotalCost = $5
2nd iteration: @OnHandQty = 3, ReceivedQty = 6, Price = 2, UpdateDate = 1/10. Results: @OnHandQty = 0, @TotalCost = $11
The final results tell me that I paid $11 for the inventory I have on hand. If I were doing this in C# or any other object-oriented language, this screams recursion to me, and I thought a recursive CTE could be more efficient. I've only successfully written recursive CTEs for hierarchy-following types of queries, and I haven't been able to wrap my head around a query that would achieve this another way.
Any help, or a simple "that's how it has to be", would be appreciated.
Here's a recursive CTE solution. A row number column has to be present to make it work, so I derived a new table (@temp_calculate2 in the demo below) containing a row number column. Ideally, the row number column would be present in #temp_calculate, but I don't know enough about your situation to say whether or not you can modify the structure of #temp_calculate.
It turns out there are four basic ways to calculate a running total in SQL Server 2005 and later: via a join, a subquery, a recursive CTE, and a cursor. I ran across a blog entry by Jerry Nixon that demonstrates the first three. The results are quite stunning. A recursive CTE is almost unbelievably fast compared to the join and subquery solutions.
Unfortunately, he didn't include a cursor solution. I created one and ran it on my computer using his example data. The cursor solution is only a little slower than the recursive CTE - 413ms vs. 273ms.
I don't know how much memory a cursor solution uses compared to a recursive CTE. I'm not good enough with SQL Profiler to get that data, but I'd be curious to see how the two approaches compare regarding memory usage.
SET NOCOUNT OFF;

DECLARE @temp_calculate TABLE
(
    ReceiptQty INT,
    Price FLOAT,
    UpdateDate DATETIME
);

INSERT INTO @temp_calculate (ReceiptQty, Price, UpdateDate) VALUES (5, 1.0, '2012-1-20');
INSERT INTO @temp_calculate (ReceiptQty, Price, UpdateDate) VALUES (6, 2.0, '2012-1-10');
INSERT INTO @temp_calculate (ReceiptQty, Price, UpdateDate) VALUES (4, 3.0, '2012-1-08');

DECLARE @temp_calculate2 TABLE
(
    RowNumber INT PRIMARY KEY,
    ReceiptQty INT,
    Price FLOAT
);

INSERT INTO @temp_calculate2
SELECT
    RowNumber = ROW_NUMBER() OVER(ORDER BY UpdateDate DESC),
    ReceiptQty,
    Price
FROM
    @temp_calculate;

;WITH LineItemCosts (RowNumber, ReceiptQty, Price, RemainingQty, LineItemCost)
AS
(
    SELECT
        RowNumber,
        ReceiptQty,
        Price,
        8, -- OnHandQty
        ReceiptQty * Price
    FROM
        @temp_calculate2
    WHERE
        RowNumber = 1
    UNION ALL
    SELECT
        T2.RowNumber,
        T2.ReceiptQty,
        T2.Price,
        LIC.RemainingQty - LIC.ReceiptQty,
        (LIC.RemainingQty - LIC.ReceiptQty) * T2.Price
    FROM
        LineItemCosts AS LIC
        INNER JOIN @temp_calculate2 AS T2 ON LIC.RowNumber + 1 = T2.RowNumber
)
/* Swap these SELECT statements to get a view of
   all of the data generated by the CTE. */
--SELECT * FROM LineItemCosts;
SELECT
    TotalCost = SUM(LineItemCost)
FROM
    LineItemCosts
WHERE
    LineItemCost > 0
OPTION
    (MAXRECURSION 10000);
Here is one thing you can try. Admittedly, this isn't the type of thing I have to deal with in the real world, but I stay away from cursors. I took your temp table #temp_calculate and added an ID ordered by UpdateDate. You could also add the fields you want in your output to your temp table (HandQty and TotalCost, as well as a new one called IndividualCost), run this one query, and use it to update HandQty and IndividualCost. Run one more UPDATE afterwards, using the same concept used here, to get and update the total cost. (In fact, you may be able to use some of this on the insert into your temp table and eliminate a step.)
I don't think it is great, but I do believe it is better than a cursor. Play with it and see what you think.
DECLARE @OnHandQty int
SET @OnHandQty = 8

SELECT a.ID,
       RECEIPTQty + TOTALOFFSET AS CURRENTOFFSET,
       TOTALOFFSET,
       CASE WHEN @OnHandQty - (RECEIPTQty + TOTALOFFSET) > 0 THEN RECEIPTQTY * PRICE
            ELSE (@OnHandQty - TOTALOFFSET) * Price END AS CALCPRICE,
       CASE WHEN @OnHandQty - RECEIPTQTY - TOTALOFFSET > 0 THEN @OnHandQty - RECEIPTQTY - TOTALOFFSET
            ELSE 0 END AS HandQuantity
FROM SO_temp_calculate a
CROSS APPLY ( SELECT ISNULL(SUM(ReceiptQty), 0) AS TOTALOFFSET
              FROM SO_temp_calculate b
              WHERE a.id > b.id
            ) X
RETURNS:
ID   CURRENTOFFSET   TOTALOFFSET   CALCPRICE   HandQuantity
------------------------------------------------------------
1    5               0             5           3
2    11              5             6           0
If you were using SQL Server 2012 you could use ranking/window functions with an OVER clause and ROWS UNBOUNDED PRECEDING. Until you get there, this is one way to deal with sliding aggregations.
CREATE CLUSTERED INDEX IDX_C_RawData_ProductID_UpdateDate ON #RawData (ProductID ASC, UpdateDate DESC, RowNumber ASC)

DECLARE @TotalCost Decimal(30,5)
DECLARE @OnHandQty Decimal(18,5)
DECLARE @PreviousProductID Int

UPDATE #RawData
SET @TotalCost = TotalCost = CASE
        WHEN RowNumber > 1
         AND @OnHandQty >= ReceiptQuantity THEN @TotalCost + (ReceiptQuantity * Price)
        WHEN RowNumber > 1
         AND @OnHandQty < ReceiptQuantity THEN @TotalCost + (@OnHandQty * Price)
        WHEN RowNumber = 1
         AND OnHand >= ReceiptQuantity THEN (ReceiptQuantity * Price)
        WHEN RowNumber = 1
         AND OnHand < ReceiptQuantity THEN (OnHand * Price)
    END
   ,@OnHandQty = OnHandQty = CASE
        WHEN RowNumber > 1
         AND @OnHandQty >= ReceiptQuantity THEN @OnHandQty - ReceiptQuantity
        WHEN RowNumber > 1
         AND @OnHandQty < ReceiptQuantity THEN 0
        WHEN RowNumber = 1
         AND OnHand >= ReceiptQuantity THEN (OnHand - ReceiptQuantity)
        WHEN RowNumber = 1
         AND OnHand < ReceiptQuantity THEN 0
    END/*,
    @PreviousProductID = ProductID*/
FROM #RawData WITH (TABLOCKX)
OPTION (MAXDOP 1)
Welp, this was the solution I ended up coming up with. I'd like to thank the fine folks watching the #sqlhelp hashtag for pointing me to this article by Jeff Moden:
http://www.sqlservercentral.com/articles/T-SQL/68467/
I did end up having to use a RowNumber on the table because it wasn't getting the first set of cases correctly. Using this construct I brought retrieving the dataset down from 17 minutes, the best I had been able to do, to 12 seconds on my vastly slower dev box. I'm confident production will lower that even more.
I've tested the output and I get the exact same results as the old way, except when two items for the same product have a different price and the update time is exactly the same; one way may pick a different order than the other. Out of 15,624 items, that only happened once where the variance was >= a penny.
Thanks everyone who answered here. I ultimately went a different way but I wouldn't have found it without you.

Split query result by half in TSQL (obtain 2 resultsets/tables)

I have a query that returns a large number of heavy rows.
When I transform these rows into a list of CustomObject I get a big memory peak, and this transformation is done by a custom .NET framework that I can't modify.
I need to retrieve a smaller number of rows, do "the transform" in two passes, and so avoid the memory peak.
How can I split the result of a query in half? I need to do it in the DB layer. I thought of doing a "TOP count(*)/2", but how do I get the other half?
Thank you!
If you have an identity field in the table, select the even ids first, then the odd ones.
select * from Table where Id % 2 = 0
select * from Table where Id % 2 = 1
You should have roughly 50% rows in each set.
Here is another way to do it, from http://www.tek-tips.com/viewthread.cfm?qid=1280248&page=5. I think it's more efficient:
Declare @Rows Int
Declare @TopRows Int
Declare @BottomRows Int

Select @Rows = Count(*) From TableName

If @Rows % 2 = 1
Begin
    Set @TopRows = @Rows / 2
    Set @BottomRows = @TopRows + 1
End
Else
Begin
    Set @TopRows = @Rows / 2
    Set @BottomRows = @TopRows
End

Set RowCount @TopRows
Select * From TableName Order By DisplayOrder

Set RowCount @BottomRows
Select * From TableName Order By DisplayOrder DESC
--- old answer below ---
Is this a stored procedure call or dynamic sql? Can you use temp tables?
If so, something like this would work:
select row_number() OVER (order by yourorderfield) as rowNumber, *
INTO #tmp
FROM dbo.yourtable

declare @rowCount int
SELECT @rowCount = count(1) from #tmp

SELECT * from #tmp where rowNumber <= @rowCount / 2
SELECT * from #tmp where rowNumber > @rowCount / 2

DROP TABLE #tmp
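A closely related variant of the same snapshot idea uses NTILE(2), which labels each row with the half it belongs to in a single pass (a sketch using the same placeholder names as above):
select *, NTILE(2) OVER (order by yourorderfield) as half
INTO #halves
FROM dbo.yourtable

SELECT * from #halves where half = 1
SELECT * from #halves where half = 2

DROP TABLE #halves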
SELECT TOP 50 PERCENT WITH TIES ... ORDER BY SomeThing
then
SELECT TOP 50 PERCENT ... ORDER BY SomeThing DESC
However, unless you snapshot the data first, a row in the middle may slip through or be processed twice
I don't think you should do that in SQL, unless you can accept the possibility of getting the same record twice.
I would do it in a "software" programming language, not SQL: Java, .NET, C++, etc...