Inventory Price Calculation in SQL

I'm on SQL Server 2005 and I'm trying to convert this cursor into something that isn't a cursor, to determine whether the cursor really is the most efficient way to do this.
--Create cursor to determine total cost
DECLARE CostCursor CURSOR FAST_FORWARD
FOR SELECT ReceiptQty
          ,Price
    FROM #temp_calculate
    ORDER BY UpdateDate DESC

OPEN CostCursor

FETCH NEXT FROM CostCursor INTO @ReceiptQty, @Price
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @OnHandQty >= @ReceiptQty
    BEGIN
        --SELECT @ReceiptQty, @Price, 1, @OnHandQty
        SET @Cost      = @ReceiptQty * @Price
        SET @OnHandQty = @OnHandQty - @ReceiptQty
        SET @TotalCost = @TotalCost + @Cost
    END
    ELSE
    BEGIN
        IF @OnHandQty < @ReceiptQty
        BEGIN
            --SELECT @ReceiptQty, @Price, 2, @OnHandQty
            SET @Cost      = @OnHandQty * @Price
            SET @OnHandQty = 0
            SET @TotalCost = @TotalCost + @Cost
            BREAK;
        END
    END
    FETCH NEXT FROM CostCursor INTO @ReceiptQty, @Price
END
CLOSE CostCursor
DEALLOCATE CostCursor
The system needs to go through and use the most recently received inventory and its price to determine what was paid for the on-hand quantity.
Ex. 1st Iteration: @OnHandQty = 8, ReceivedQty = 5, Price = 1, UpdateDate = 1/20 -> Results: @OnHandQty = 3, @TotalCost = $5
2nd Iteration: @OnHandQty = 3, ReceivedQty = 6, Price = 2, UpdateDate = 1/10 -> Results: @OnHandQty = 0, @TotalCost = $11
The final results tell me that I paid $11 for the inventory I have on hand. If I were doing this in C# or any other object-oriented language, this screams recursion to me, so I thought a recursive CTE might be more efficient. I've only successfully written recursive CTEs for hierarchy-following types of queries, and I haven't been able to wrap my head around a query that would achieve this another way.
Any help, or a simple "that's how it has to be", would be appreciated.

Here's a recursive CTE solution. A row number column has to be present to make it work, so I derived a new table variable (@temp_calculate2) containing a row number column. Ideally, the row number column would be present in #temp_calculate itself, but I don't know enough about your situation to say whether you can modify the structure of #temp_calculate.
It turns out there are four basic ways to calculate a running total in SQL Server 2005 and later: via a join, a subquery, a recursive CTE, and a cursor. I ran across a blog entry by Jerry Nixon that demonstrates the first three. The results are quite stunning. A recursive CTE is almost unbelievably fast compared to the join and subquery solutions.
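For reference, here is a minimal sketch of the subquery flavor (my own illustration, not code from the blog entry), assuming a table Sales(ID INT PRIMARY KEY, Amount MONEY):
SELECT s.ID,
       s.Amount,
       -- correlated subquery: sum every amount up to and including this row
       (SELECT SUM(s2.Amount) FROM Sales s2 WHERE s2.ID <= s.ID) AS RunningTotal
FROM Sales s
ORDER BY s.ID;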
Unfortunately, he didn't include a cursor solution. I created one and ran it on my computer using his example data. The cursor solution is only a little slower than the recursive CTE - 413ms vs. 273ms.
I don't know how much memory a cursor solution uses compared to a recursive CTE. I'm not good enough with SQL Profiler to get that data, but I'd be curious to see how the two approaches compare regarding memory usage.
SET NOCOUNT OFF;

DECLARE @temp_calculate TABLE
(
    ReceiptQty INT,
    Price      FLOAT,
    UpdateDate DATETIME
);

INSERT INTO @temp_calculate (ReceiptQty, Price, UpdateDate) VALUES (5, 1.0, '2012-1-20');
INSERT INTO @temp_calculate (ReceiptQty, Price, UpdateDate) VALUES (6, 2.0, '2012-1-10');
INSERT INTO @temp_calculate (ReceiptQty, Price, UpdateDate) VALUES (4, 3.0, '2012-1-08');

DECLARE @temp_calculate2 TABLE
(
    RowNumber  INT PRIMARY KEY,
    ReceiptQty INT,
    Price      FLOAT
);

INSERT INTO @temp_calculate2
SELECT
    RowNumber = ROW_NUMBER() OVER(ORDER BY UpdateDate DESC),
    ReceiptQty,
    Price
FROM
    @temp_calculate;
;WITH LineItemCosts (RowNumber, ReceiptQty, Price, RemainingQty, LineItemCost)
AS
(
    SELECT
        RowNumber,
        ReceiptQty,
        Price,
        8,  -- OnHandQty, hard-coded for the example
        -- cost only what is actually on hand, capped at this receipt's quantity
        CASE WHEN ReceiptQty < 8 THEN ReceiptQty ELSE 8 END * Price
    FROM
        @temp_calculate2
    WHERE
        RowNumber = 1
    UNION ALL
    SELECT
        T2.RowNumber,
        T2.ReceiptQty,
        T2.Price,
        LIC.RemainingQty - LIC.ReceiptQty,
        -- same cap here; a negative remainder yields a negative cost,
        -- which the final WHERE filters out
        CASE WHEN LIC.RemainingQty - LIC.ReceiptQty > T2.ReceiptQty
             THEN T2.ReceiptQty
             ELSE LIC.RemainingQty - LIC.ReceiptQty
        END * T2.Price
    FROM
        LineItemCosts AS LIC
        INNER JOIN @temp_calculate2 AS T2 ON LIC.RowNumber + 1 = T2.RowNumber
)
/* Swap these SELECT statements to get a view of
   all of the data generated by the CTE. */
--SELECT * FROM LineItemCosts;
SELECT
    TotalCost = SUM(LineItemCost)
FROM
    LineItemCosts
WHERE
    LineItemCost > 0
OPTION
    (MAXRECURSION 10000);

Here is one thing you can try. Admittedly, this isn't the type of thing I have to deal with in the real world, but I stay away from cursors. I took your temp table #temp_calculate and added an ID ordered by UpdateDate. You could also add the fields you want in your output - HandQty and TotalCost, as well as a new one called IndividualCost - to the temp table, run this one query, and use it to UPDATE HandQty and IndividualCost. Then run one more UPDATE after that, using the same concept shown here, to get and update the total cost. (In fact, you may be able to use some of this on the insert to your temp table and eliminate a step.)
I don't think it is great, but I do believe it is better than a cursor. Play with it and see what you think.
DECLARE @OnHandQty INT
SET @OnHandQty = 8

SELECT a.ID,
       ReceiptQty + TOTALOFFSET AS CURRENTOFFSET,
       TOTALOFFSET,
       CASE WHEN @OnHandQty - (ReceiptQty + TOTALOFFSET) > 0 THEN ReceiptQty * Price
            ELSE (@OnHandQty - TOTALOFFSET) * Price END AS CALCPRICE,
       CASE WHEN @OnHandQty - ReceiptQty - TOTALOFFSET > 0 THEN @OnHandQty - ReceiptQty - TOTALOFFSET
            ELSE 0 END AS HandQuantity
FROM SO_temp_calculate a
CROSS APPLY ( SELECT ISNULL(SUM(ReceiptQty), 0) AS TOTALOFFSET
              FROM SO_temp_calculate b
              WHERE a.ID > b.ID
            ) x
RETURNS:

ID   CURRENTOFFSET   TOTALOFFSET   CALCPRICE   HandQuantity
-----------------------------------------------------------
1    5               0             5           3
2    11              5             6           0
If you were using SQL Server 2012 you could use the ranking/window functions with an OVER clause and ROWS UNBOUNDED PRECEDING. Until you get there, this is one way to deal with sliding aggregations.
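For example, on 2012 the TOTALOFFSET computed by the CROSS APPLY above could be written as a windowed SUM - an untested sketch against the same SO_temp_calculate table:
SELECT ID, ReceiptQty, Price,
       -- running sum of everything received before this row; 0 for the first row
       ISNULL(SUM(ReceiptQty) OVER (ORDER BY ID
                                    ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS TOTALOFFSET
FROM SO_temp_calculate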

-- the "quirky update" pattern: the clustered index drives the row order,
-- and the variables carry the running values from row to row
CREATE CLUSTERED INDEX IDX_C_RawData_ProductID_UpdateDate ON #RawData (ProductID ASC, UpdateDate DESC, RowNumber ASC)

DECLARE @TotalCost         DECIMAL(30,5)
DECLARE @OnHandQty         DECIMAL(18,5)
DECLARE @PreviousProductID INT

UPDATE #RawData
SET @TotalCost = TotalCost = CASE
                                 WHEN RowNumber > 1
                                  AND @OnHandQty >= ReceiptQuantity THEN @TotalCost + (ReceiptQuantity * Price)
                                 WHEN RowNumber > 1
                                  AND @OnHandQty < ReceiptQuantity  THEN @TotalCost + (@OnHandQty * Price)
                                 WHEN RowNumber = 1
                                  AND OnHand >= ReceiptQuantity     THEN (ReceiptQuantity * Price)
                                 WHEN RowNumber = 1
                                  AND OnHand < ReceiptQuantity      THEN (OnHand * Price)
                             END
   ,@OnHandQty = OnHandQty = CASE
                                 WHEN RowNumber > 1
                                  AND @OnHandQty >= ReceiptQuantity THEN @OnHandQty - ReceiptQuantity
                                 WHEN RowNumber > 1
                                  AND @OnHandQty < ReceiptQuantity  THEN 0
                                 WHEN RowNumber = 1
                                  AND OnHand >= ReceiptQuantity     THEN (OnHand - ReceiptQuantity)
                                 WHEN RowNumber = 1
                                  AND OnHand < ReceiptQuantity      THEN 0
                             END/*,
    @PreviousProductID = ProductID*/
FROM #RawData WITH (TABLOCKX)
OPTION (MAXDOP 1)
Welp, this was the solution I ended up coming up with. I'd like to thank the fine folks watching the #sqlhelp hashtag for pointing me to this article by Jeff Moden:
http://www.sqlservercentral.com/articles/T-SQL/68467/
I did end up having to use a RowNumber on the table because it wasn't getting the first set of cases correctly. Using this construct I brought retrieving the dataset down from 17 minutes (the best I had been able to do) to 12 seconds on my vastly slower dev box. I'm confident production will lower that even more.
I've tested the output and I get the exact same results as the old way, except when two items for the same product have different prices and exactly the same update time; one way may pick a different order than the other. Out of 15,624 items that happened only once where the variance was >= a penny.
Thanks everyone who answered here. I ultimately went a different way, but I wouldn't have found it without you.


Query with row by row calculation for running total

I have a problem where jobs become 'due' at the start of a week and each week there are a certain number of 'slots' available to complete any outstanding jobs. If there are not enough slots then the jobs roll over to the next week.
My initial table looks like this:
Week         Slots   Due
23/8/2021    0       1
30/8/2021    2       3
6/9/2021     5       2
13/9/2021    1       4
I want to maintain a running total of the number of 'due' jobs at the end of each week.
Each week the number due would be added to the running total from last week, then the number of slots this week would be subtracted. If there are enough slots to do all the jobs required then the running total will be 0 (never negative).
As an example - the below shows how I would achieve this in javascript:
var Total = 0;
data.forEach(function(d){
    Total += d.Due;
    Total -= d.Slots;
    Total = Total > 0 ? Total : 0;
    d.Total = Total;
});
The result would be as below:
Week         Slots   Due   Total
23/8/2021    0       1     1
30/8/2021    2       3     2
6/9/2021     5       2     0
13/9/2021    1       4     3
Is it possible for me to achieve this in SQL (specifically SQL Server 2012)?
I have tried various forms of sum(xxx) over (order by yyy)
Closest I managed was:
sum(Due) over (order by Week) - sum(Slots) over (order by Week) as Total
This provided a running total, but will provide a negative total when there are excess slots.
Is the only way to do this with a cursor? If so - any suggestions?
Thanks.
Possible answer(s) to my own question based on suggestions in comments.
Thorsten Kettner suggested a recursive query:
with cte as (
    select [Week], [Due], [Slots]
          ,case when Due > Slots then Due - Slots else 0 end as [Total]
    from [Data]
    where [Week] = (select top 1 [Week] from [Data] order by [Week])
    union all
    select e.[Week], e.[Due], e.[Slots]
          ,case when cte.Total + e.Due - e.Slots > 0 then cte.Total + e.Due - e.Slots else 0 end as [Total]
    from [Data] e
    inner join cte on cte.[Week] = dateadd(day, -7, e.[Week])
)
select * from cte
OPTION (MAXRECURSION 200)
Thorsten - is this what you were suggesting? (If you have any improvements, please post as an answer so I can accept it!)
Presumably I have to ensure that MAXRECURSION is set to something higher than the number of rows I will be dealing with?
I am a little bit nervous about the join on dateadd(day,-7,e.[Week]). Would I be better off doing something with ROW_NUMBER() to get the previous record? I may want to use something other than weeks, or some weeks may be missing.
George Menoutis suggested a 'while' query and I was looking for ways to implement that when I came across this post: https://stackoverflow.com/a/35471328/1372848
This suggested that a cursor may not be all that bad compared to a while loop?
This is the cursor based version I came up with:
SET NOCOUNT ON;

DECLARE @Week  Date,
        @Due   Int,
        @Slots Int,
        @Total Int = 0;

DECLARE @Output TABLE ([Week] Date NOT NULL, Due Int NOT NULL, Slots Int NOT NULL, Total Int);

DECLARE crs CURSOR STATIC LOCAL READ_ONLY FORWARD_ONLY
FOR SELECT [Week], Due, Slots
    FROM [Data]
    ORDER BY [Week] ASC;

OPEN crs;

FETCH NEXT
FROM crs
INTO @Week, @Due, @Slots;

WHILE (@@FETCH_STATUS = 0)
BEGIN
    SET @Total = @Total + @Due;
    SET @Total = @Total - @Slots;
    SET @Total = IIF(@Total > 0, @Total, 0);

    INSERT INTO @Output ([Week], [Due], [Slots], [Total])
    VALUES (@Week, @Due, @Slots, @Total);

    FETCH NEXT
    FROM crs
    INTO @Week, @Due, @Slots;
END;

CLOSE crs;
DEALLOCATE crs;

SELECT *
FROM @Output;
Both of these seem to work as intended. The recursive query feels better (cursors = bad, etc.), but is it designed to be used this way, with a recursion for every input row and therefore potentially a very high number of recursions?
Many thanks for everyone's input :-)
Improvement on previous answer following input from Thorsten
with numbered as (
    select *, ROW_NUMBER() OVER (ORDER BY [Week]) as RN
    from [Data]
)
,cte as (
    select [Week], [Due], [Slots], [RN]
          ,case when Due > Slots then Due - Slots else 0 end as [Total]
    from numbered
    where RN = 1
    union all
    select e.[Week], e.[Due], e.[Slots], e.[RN]
          ,case when cte.Total + e.Due - e.Slots > 0 then cte.Total + e.Due - e.Slots else 0 end as [Total]
    from numbered e
    inner join cte on cte.[RN] = e.[RN] - 1
)
select * from cte
OPTION (MAXRECURSION 0)
Many thanks Thorsten for all your help.

SQL Server - loop through table and update based on count

I have a SQL Server database. I need to loop through a table to get the count of each value in the column 'RevID'. Each value should only be in the table a certain number of times - for example, 125 times. If the count of a value is greater than or less than 125, I need to update the column so that all the RevID values (there are over 25 different values) fall within the same range around 125 (it's okay to be a few numbers off).
For example, if the count of RevID = 'A2' is 45 and the count of RevID = 'B2' is 165, then I need to update RevID so the 45 count increases and the 165 count decreases until they are both within range of 125.
This is what I have so far:
DECLARE @i INT = 1,
        @RevCnt INT = SELECT RevId, COUNT(RevId) FROM MyTable group by RevId
WHILE(@RevCnt >= 50)
BEGIN
    UPDATE MyTable
    SET RevID = (SELECT COUNT(RevID) FROM MyTable)
    WHERE RevID < 50)
    @i = @i + 1
END
I have also played around with a cursor and an INSTEAD OF trigger. Any idea on how to achieve this? Thanks for any input.
Okay, I came back to this because I found it interesting, even though clearly there are some business rules/discussion that you and I and others are not seeing. Anyway, if you want to distribute evenly and arbitrarily, there are a few ways you could do it, by building recursive Common Table Expressions [CTEs], by building temp tables, and more. Here is the way I decided to try. I did use one temp table, because SQL was throwing in a little inconsistency with the main logic table as a CTE about every tenth run, and the temp table seems to have cleared that up. This will spread RevId evenly, arbitrarily and randomly assigning any remainder (# of records / # of RevIds) to one of the RevIds. The script also doesn't rely on having a unique ID or anything; it works dynamically over row numbers it creates. Here you go - just subtract out the test data and you have what you more than likely want. Though rebuilding the table/values would probably be easier.
--Build Some Test Data
DECLARE @Table AS TABLE (RevId VARCHAR(10))
DECLARE @C AS INT = 1

WHILE @C <= 400
BEGIN
    IF @C <= 200
    BEGIN
        INSERT INTO @Table (RevId) VALUES ('A1')
    END
    IF @C <= 170
    BEGIN
        INSERT INTO @Table (RevId) VALUES ('B2')
    END
    IF @C <= 100
    BEGIN
        INSERT INTO @Table (RevId) VALUES ('C3')
    END
    IF @C <= 400
    BEGIN
        INSERT INTO @Table (RevId) VALUES ('D4')
    END
    IF @C <= 1
    BEGIN
        INSERT INTO @Table (RevId) VALUES ('E5')
    END
    SET @C = @C + 1
END
--save starting counts of test data to temp table to compare with later
IF OBJECT_ID('tempdb..#StartingCounts') IS NOT NULL
BEGIN
    DROP TABLE #StartingCounts
END

SELECT
    RevId
    ,COUNT(*) as Occurrences
INTO #StartingCounts
FROM
    @Table
GROUP BY
    RevId
ORDER BY
    RevId
/************************ This is the main method **********************************/
--clear temp table that is the main processing logic
IF OBJECT_ID('tempdb..#RowNumsToChange') IS NOT NULL
BEGIN
    DROP TABLE #RowNumsToChange
END

--figure out how many records there are and how many there should be for each RevId
;WITH cteTargetNumbers AS (
    SELECT
        RevId
        --,COUNT(*) as RevIdCount
        --,SUM(COUNT(*)) OVER (PARTITION BY 1) / COUNT(*) OVER (PARTITION BY 1) +
        --CASE
        --    WHEN ROW_NUMBER() OVER (PARTITION BY 1 ORDER BY NEWID()) <=
        --         SUM(COUNT(*)) OVER (PARTITION BY 1) % COUNT(*) OVER (PARTITION BY 1)
        --    THEN 1
        --    ELSE 0
        --END as TargetNumOfRecords
        ,SUM(COUNT(*)) OVER (PARTITION BY 1) / COUNT(*) OVER (PARTITION BY 1) +
        CASE
            WHEN ROW_NUMBER() OVER (PARTITION BY 1 ORDER BY NEWID()) <=
                 SUM(COUNT(*)) OVER (PARTITION BY 1) % COUNT(*) OVER (PARTITION BY 1)
            THEN 1
            ELSE 0
        END - COUNT(*) AS NumRecordsToUpdate
    FROM
        @Table
    GROUP BY
        RevId
)
, cteEndRowNumsToChange AS (
    SELECT *
        ,SUM(CASE WHEN NumRecordsToUpdate > 1 THEN NumRecordsToUpdate ELSE 0 END)
             OVER (PARTITION BY 1 ORDER BY RevId) AS ChangeEndRowNum
    FROM
        cteTargetNumbers
)
SELECT
    *
    ,LAG(ChangeEndRowNum,1,0) OVER (PARTITION BY 1 ORDER BY RevId) as ChangeStartRowNum
INTO #RowNumsToChange
FROM
    cteEndRowNumsToChange
;WITH cteOriginalTableRowNum AS (
    SELECT
        RevId
        ,ROW_NUMBER() OVER (PARTITION BY RevId ORDER BY (SELECT 0)) as RowNumByRevId
    FROM
        @Table t
)
, cteRecordsAllowedToChange AS (
    SELECT
        o.RevId
        ,o.RowNumByRevId
        ,ROW_NUMBER() OVER (PARTITION BY 1 ORDER BY (SELECT 0)) as ChangeRowNum
    FROM
        cteOriginalTableRowNum o
        INNER JOIN #RowNumsToChange t
            ON o.RevId = t.RevId
            AND t.NumRecordsToUpdate < 0
            AND o.RowNumByRevId <= ABS(t.NumRecordsToUpdate)
)
UPDATE o
SET RevId = u.RevId
FROM
    cteOriginalTableRowNum o
    INNER JOIN cteRecordsAllowedToChange c
        ON o.RevId = c.RevId
        AND o.RowNumByRevId = c.RowNumByRevId
    INNER JOIN #RowNumsToChange u
        ON c.ChangeRowNum > u.ChangeStartRowNum
        AND c.ChangeRowNum <= u.ChangeEndRowNum
        AND u.NumRecordsToUpdate > 0

IF OBJECT_ID('tempdb..#RowNumsToChange') IS NOT NULL
BEGIN
    DROP TABLE #RowNumsToChange
END
/***************************** End of Main Method *******************************/
-- Compare the results and clean up
;WITH ctePostUpdateResults AS (
    SELECT
        RevId
        ,COUNT(*) as AfterChangeOccurrences
    FROM
        @Table
    GROUP BY
        RevId
)
SELECT *
FROM
    #StartingCounts s
    INNER JOIN ctePostUpdateResults r
        ON s.RevId = r.RevId
ORDER BY
    s.RevId

IF OBJECT_ID('tempdb..#StartingCounts') IS NOT NULL
BEGIN
    DROP TABLE #StartingCounts
END
Since you've given no rules for how you'd like the balancing to operate, we're left to speculate. Here's an approach that finds the most overrepresented value and then finds an underrepresented value that can take on the entire overage.
I have no idea how optimal this is, and it will probably run in an infinite loop without more logic.
declare @balance int = 125;
declare @cnt_over int;
declare @cnt_under int;
declare @revID_overrepresented varchar(32);
declare @revID_underrepresented varchar(32);
declare @rowcount int = 1;

while @rowcount > 0
begin
    select top 1 @revID_overrepresented = RevID, @cnt_over = count(*)
    from T
    group by RevID
    having count(*) > @balance
    order by count(*) desc

    select top 1 @revID_underrepresented = RevID, @cnt_under = count(*)
    from T
    group by RevID
    having count(*) < @balance - @cnt_over
    order by count(*) desc

    update top (@cnt_over - @balance) T
    set RevId = @revID_underrepresented
    where RevId = @revID_overrepresented;

    set @rowcount = @@rowcount;
end
The problem is I don't even know what you mean by balance... You say it needs to be evenly represented, but it seems like you want it to be 125. 125 is not "even"; it is just 125.
I can't tell what you are trying to do, but I'm guessing this is not really a SQL problem. Still, you can use SQL to help. Here is some helpful SQL for you; you can use it in your language of choice to solve the problem.
Find the RevID values and their counts:
SELECT RevID, COUNT(*)
FROM MyTable
GROUP BY RevID
Update @X rows (with a RevID of value @RevID) to a new value @NewValue:
UPDATE TOP (@X) MyTable
SET RevID = @NewValue
WHERE RevID = @RevID
Using these two queries you should be able to apply your business rules (which you never specified) in a loop or whatever to change the data.
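For instance, here is a rough sketch of such a loop (the table name, the target of 125, and the stopping rule are my assumptions, and like the other answer's loop it can spin forever if the rows can't actually all fit under the target):
DECLARE @target INT
SET @target = 125

DECLARE @over VARCHAR(32), @under VARCHAR(32), @excess INT

-- keep moving rows while any value sits above the target count
WHILE EXISTS (SELECT 1 FROM MyTable GROUP BY RevID HAVING COUNT(*) > @target)
BEGIN
    -- most overrepresented value and how far over the target it is
    SELECT TOP 1 @over = RevID, @excess = COUNT(*) - @target
    FROM MyTable GROUP BY RevID ORDER BY COUNT(*) DESC

    -- the least represented value receives the excess rows
    SELECT TOP 1 @under = RevID
    FROM MyTable GROUP BY RevID ORDER BY COUNT(*) ASC

    UPDATE TOP (@excess) MyTable
    SET RevID = @under
    WHERE RevID = @over
END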

Group data without changing query flow

It's hard for me to explain what I want, so the title may be unclear, but I hope I can describe it with code.
I have some data with two important values: time t and value f(t). It's stored in a table, for example
1 - 1000
2 - 1200
3 - 1100
4 - 1500
...
I want to plot a graph from it, and this graph should contain N points. If the table has fewer rows than N, then we just return the table. But if it has more, we should group the points. For example, with N = Count/2, the example above becomes:
1 - (1000+1200)/2 = 1100
2 - (1100+1500)/2 = 1300
...
I wrote an SQL script (it works fine for N >> Count; MonitoringDateTime is t, and ResultCount is f(t)):
ALTER PROCEDURE [dbo].[usp_GetRequestStatisticsData]
    @ResourceTypeID bigint,
    @DateFrom datetime,
    @DateTo datetime,
    @EstimatedPointCount int
AS
BEGIN
    SET NOCOUNT ON;
    SET ARITHABORT ON;

    declare @groupSize int;
    declare @resourceCount int;

    select @resourceCount = Count(*)
    from ResourceType
    where ID & @ResourceTypeID > 0

    SELECT d.ResultCount
          ,MonitoringDateTime = d.GeneratedOnUtc
          ,ResourceType = a.ResourceTypeID
          ,ROW_NUMBER() OVER(ORDER BY d.GeneratedOnUtc asc) AS [Row]
    into #t
    FROM dbo.AgentData d
        INNER JOIN dbo.Agent a ON a.CheckID = d.CheckID
    WHERE d.EventType = 'Result' AND
          a.ResourceTypeID & @ResourceTypeID > 0 AND
          d.GeneratedOnUtc between @DateFrom AND @DateTo AND
          d.Result = 1

    select @groupSize = Count(*) / (@EstimatedPointCount * @resourceCount)
    from #t

    if @groupSize = 0 -- return all points
        select ResourceType, MonitoringDateTime, ResultCount
        from #t
    else
        select ResourceType, CAST(AVG(CAST(#t.MonitoringDateTime AS DECIMAL(18, 6))) AS DATETIME) MonitoringDateTime, AVG(ResultCount) ResultCount
        from #t
        where [Row] % @groupSize = 0
        group by ResourceType, [Row]
        order by MonitoringDateTime
END
, but it doesn't work for N ~= Count, and it spends a lot of time on the inserts.
This is why I wanted to use CTEs, but they don't work with an if/else statement.
So I calculated a formula for a group number (to use in the GROUP BY clause), because we have
GroupNumber = Count < N ? Row : Row*NumberOfGroups
where Count is the number of rows in the table, and NumberOfGroups = Count/EstimatedPointCount.
Using some trivial mathematics we get the formula
GroupNumber = Row + (Row*Count/EstimatedPointCount - Row)*MAX(Count - Count/EstimatedPointCount,0)/(Count - Count/EstimatedPointCount)
but it doesn't work because of the Count aggregate function:
Column 'dbo.AgentData.ResultCount' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
My English is very bad, I know (and I'm trying to improve it), but hope dies last, so please advise.
Results of this query:
SELECT d.ResultCount
, MonitoringDateTime = d.GeneratedOnUtc
, ResourceType = a.ResourceTypeID
FROM dbo.AgentData d
INNER JOIN dbo.Agent a ON a.CheckID = d.CheckID
WHERE d.GeneratedOnUtc between '2015-01-28' AND '2015-01-30' AND
a.ResourceTypeID & 1376256 > 0 AND
d.EventType = 'Result' AND
d.Result = 1
https://onedrive.live.com/redir?resid=58A31FC352FC3D1A!6118&authkey=!AATDebemNJIgHoo&ithint=file%2ccsv
Here's an example using NTILE and your simple sample data at the top of your question:
declare @samples table (ID int, sample int)

insert into @samples (ID, sample) values
(1,1000),
(2,1200),
(3,1100),
(4,1500)

declare @results int
set @results = 2

;With grouped as (
    select *, NTILE(@results) OVER (order by ID) as nt
    from @samples
)
select nt, AVG(sample) from grouped
group by nt
Which produces:
nt
-------------------- -----------
1                    1100
2                    1300
If @results is changed to 4 (or any higher number) then you just get back your original result set.
Unfortunately, I don't have your full data nor can I fully understand what you're trying to do with the full stored procedure, so the above would probably need to be adapted somewhat.
I haven't tried it, but how about instead of
select ResourceType, CAST(AVG(CAST(#t.MonitoringDateTime AS DECIMAL(18, 6))) AS DATETIME) MonitoringDateTime, AVG(ResultCount) ResultCount
from #t
where [Row] % @groupSize = 0
group by ResourceType, [Row]
order by MonitoringDateTime
perhaps something like
select ResourceType, CAST(AVG(CAST(#t.MonitoringDateTime AS DECIMAL(18, 6))) AS DATETIME) MonitoringDateTime, AVG(ResultCount) ResultCount
from #t
group by ResourceType, convert(int, [Row]/@groupSize)
order by MonitoringDateTime
Maybe that points you in some new direction? By converting to int we truncate everything after the decimal, so I'm hoping that will give you a better grouping. You might need to put your row number over resource type for this to work.

SQL query with start and end dates - what is the best option?

I am using MS SQL Server 2005 at work to build a database. I have been told that most tables will hold 1,000,000 to 500,000,000 rows of data in the near future after it is built... I have not worked with datasets this large. Most of the time I don't even know what I should be considering in order to figure out the best way to set up the schema, the queries, and so on.
So... I need to know the start and end dates for something, and a value that is associated with an ID during that time frame. So... we can set the table up two different ways:
create table xxx_test2 (id int identity(1,1), groupid int, dt datetime, i int)
create table xxx_test2 (id int identity(1,1), groupid int, start_dt datetime, end_dt datetime, i int)
Which is better? How do I define better? I filled the first table with about 100,000 rows of data, and it takes about 10-12 seconds to get it into the format of the second table, depending on the query...
select y.groupid,
y.dt as [start],
z.dt as [end],
(case when z.dt is null then 1 else 0 end) as latest,
y.i
from #x as y
outer apply (select top 1 *
from #x as x
where x.groupid = y.groupid and
x.dt > y.dt
order by x.dt asc) as z
or
http://consultingblogs.emc.com/jamiethomson/archive/2005/01/10/t-sql-deriving-start-and-end-date-from-a-single-effective-date.aspx
Buuuuut... with the second table, to insert a new row I have to go look and see if there is a previous row, and if so, update its end date. So... is it a question of performance when retrieving data vs. inserting/updating? It seems silly to store that end date twice, but maybe... not? What things should I be looking at?
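In other words, every insert becomes a two-step dance, something like this (sketched off the top of my head, assuming a NULL end_dt marks the "current" row):
declare @groupid int, @newStart datetime, @newValue int
set @groupid = 1
set @newStart = getdate()
set @newValue = 42

begin tran

-- close off the previous current row for this group, if there is one
update xxx_test2
set end_dt = @newStart
where groupid = @groupid and end_dt is null

-- add the new current row
insert into xxx_test2 (groupid, start_dt, end_dt, i)
values (@groupid, @newStart, null, @newValue)

commit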
This is what I used to generate my fake data, if you want to play with it for some reason (if you change the maximum of the random number to something higher, it will generate the fake data a lot faster):
declare @dt datetime
declare @i int
declare @id int
set @id = 1
declare @rowcount int
set @rowcount = 0
declare @numrows int

while (@rowcount < 100000)
begin
    set @i = 1
    set @dt = getdate()
    set @numrows = Cast(((5 + 1) - 1) * Rand() + 1 As tinyint)
    while @i <= @numrows
    begin
        insert into #x values (@id, dateadd(d, @i, @dt), @i)
        set @i = @i + 1
    end
    set @rowcount = @rowcount + @numrows
    set @id = @id + 1
    print @rowcount
end
For your purposes, I think option 2 is the way to go for table design. This gives you flexibility, and will save you tons of work.
Having the effective date and end date will allow you to have a query that will only return currently effective data by having this in your where clause:
where GETDATE() between effectivedate and enddate
You can also then use it to join with other tables in a time-sensitive way.
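For example, a time-sensitive join might look something like this (table and column names invented purely for illustration):
SELECT o.OrderID, p.Price
FROM Orders o
INNER JOIN PriceHistory p
    ON  p.ProductID = o.ProductID
    AND o.OrderDate BETWEEN p.EffectiveDate AND p.EndDate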
Provided you set up the key properly and provide the right indexes, performance (on this table at least) should not be a problem.
For anyone who can use the LEAD analytic function of SQL Server 2012 (or Oracle, DB2, ...), retrieving data from the first table (the one that uses only one date column) would be much, much quicker than without this feature:
select
groupid,
dt "start",
lead(dt) over (partition by groupid order by dt) "end",
case when lead(dt) over (partition by groupid order by dt) is null
then 1 else 0 end "latest",
i
from x

Split query result by half in TSQL (obtain 2 resultsets/tables)

I have a query that returns a large number of heavy rows.
When I transform these rows into a list of CustomObject I get a big memory peak, and this transformation is done by a custom .NET framework that I can't modify.
I need to retrieve a smaller number of rows and do "the transform" in two passes, to avoid the memory peak.
How can I split the result of a query in half? I need to do it in the DB layer. I thought of doing a "TOP Count(*)/2", but how do I get the other half?
Thank you!
If you have an identity field in the table, select the even IDs first, then the odd ones.
select * from Table where Id % 2 = 0
select * from Table where Id % 2 = 1
You should have roughly 50% rows in each set.
Here is another way to do it, from http://www.tek-tips.com/viewthread.cfm?qid=1280248&page=5. I think it's more efficient:
Declare @Rows Int
Declare @TopRows Int
Declare @BottomRows Int

Select @Rows = Count(*) From TableName

If @Rows % 2 = 1
Begin
    Set @TopRows = @Rows / 2
    Set @BottomRows = @TopRows + 1
End
Else
Begin
    Set @TopRows = @Rows / 2
    Set @BottomRows = @TopRows
End

Set RowCount @TopRows
Select * From TableName Order By DisplayOrder

Set RowCount @BottomRows
Select * From TableName Order By DisplayOrder Desc
--- old answer below ---
Is this a stored procedure call or dynamic sql? Can you use temp tables?
if so, something like this would work
select row_number() OVER(order by yourorderfield) as rowNumber, *
INTO #tmp
FROM dbo.yourtable

declare @rowCount int
SELECT @rowCount = count(1) from #tmp

SELECT * from #tmp where rowNumber <= @rowCount / 2
SELECT * from #tmp where rowNumber > @rowCount / 2

DROP TABLE #tmp
SELECT TOP 50 PERCENT WITH TIES ... ORDER BY SomeThing
then
SELECT TOP 50 PERCENT ... ORDER BY SomeThing DESC
However, unless you snapshot the data first, a row in the middle may slip through or be processed twice
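If that's a concern, snapshot the data into a temp table first and take both halves from the snapshot (table and sort column names assumed):
SELECT * INTO #snapshot FROM dbo.yourtable

SELECT TOP 50 PERCENT WITH TIES * FROM #snapshot ORDER BY SomeThing
SELECT TOP 50 PERCENT * FROM #snapshot ORDER BY SomeThing DESC

DROP TABLE #snapshot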
I don't think you should do that in SQL, unless you can live with the possibility of getting the same record twice.
I would do it in a "software" programming language, not SQL. Java, .NET, C++, etc...