How to use the previous row's column value to calculate the next row's column value - SQL

I have a table:
Id | Aisle | OddEven | Bay | Size | Y-Axis
 3 | A1    | Even    | 14  | 10   | 100
 1 | A1    | Even    | 16  | 10   |
 6 | A1    | Even    | 20  | 10   |
12 | A1    | Even    | 26  | 5    | 150
10 | A1    | Even    | 28  | 5    |
11 | A1    | Even    | 32  | 5    |
 2 | A1    | Odd     | 13  | 10   | 100
 5 | A1    | Odd     | 17  | 10   |
 4 | A1    | Odd     | 19  | 10   |
 9 | A1    | Odd     | 23  | 5    | 150
 7 | A1    | Odd     | 25  | 5    |
 8 | A1    | Odd     | 29  | 5    |
I want it to look like this:
Id | Aisle | OddEven | Bay | Size | Y-Axis
 1 | A1    | Even    | 14  | 10   | 100
 2 | A1    | Even    | 16  | 10   | 110
 3 | A1    | Even    | 20  | 10   | 120
 4 | A1    | Even    | 26  | 5    | 150
 5 | A1    | Even    | 28  | 5    | 155
 6 | A1    | Even    | 32  | 5    | 160
 7 | A1    | Odd     | 13  | 10   | 100
 8 | A1    | Odd     | 17  | 10   | 110
 9 | A1    | Odd     | 19  | 10   | 120
10 | A1    | Odd     | 23  | 5    | 150
11 | A1    | Odd     | 25  | 5    | 155
12 | A1    | Odd     | 29  | 5    | 160
I need a select query and an update query. Some Y-Axis values are already filled in (at the start of each Odd/Even run). For each following row, I need to take the previous row's Y-Axis value and add the current row's Size to it, which gives the current row's Y-Axis. This keeps going until another row that already has a Y-Axis value is found; that row skips the calculation, and the next row continues from that number.
My thinking process is this: the Id will definitely be used; however, the Ids are not in sequence, as shown in my example. So I need something like
ROW_NUMBER() OVER (PARTITION BY Aisle, OddEven, Bay ORDER BY Aisle, OddEven, Bay)
and then some kind of JOIN of the table to itself, where the ON is T1.RN = T2.RN - 1.
Where I am stuck: the first row has no previous value, so the update would try to use a value that isn't there.
Any ideas for SQL Server 2008 select and update queries would be greatly appreciated! Thanks.

You seem to want a cumulative sum. This would be easier in SQL Server 2012+. You can do this in SQL Server 2008 using outer apply:
select t.*, t2.cume_value
from t outer apply
     -- for each row, total the sizes and any seeded Y-Axis values of all
     -- preceding bays in the same aisle / odd-even group
     (select sum(size) + sum(yaxis) as cume_value
      from t t2
      where t2.aisle = t.aisle and
            t2.oddeven = t.oddeven and
            t2.bay < t.bay
     ) t2;
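For reference, on SQL Server 2012+ the same idea can be written as a windowed SUM over the preceding rows. This is a minimal sketch, assuming the same generic table t and columns as above; unlike the OUTER APPLY version it treats missing yaxis values as 0 via coalesce, so rows before the first seeded value still get a non-NULL total:
select t.*,
       -- running total of all preceding sizes plus seeded Y-Axis values
       -- within the same aisle / odd-even group (2012+ frame syntax)
       sum(size + coalesce(yaxis, 0)) over (
           partition by aisle, oddeven
           order by bay
           rows between unbounded preceding and 1 preceding
       ) as cume_value
from t;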

This is a little more difficult on 2008, but I think this is what you are looking for:
Declare @Table table (Id int, Aisle varchar(25), OddEven varchar(25), Bay int, Size int, [Y-Axis] int)
Insert Into @Table values
 (3,'A1','Even',14,10,100),
 (1,'A1','Even',16,10,0),
 (6,'A1','Even',20,10,0),
(12,'A1','Even',26,5,150),
(10,'A1','Even',28,5,0),
(11,'A1','Even',32,5,0),
 (2,'A1','Odd',13,10,100),
 (5,'A1','Odd',17,10,0),
 (4,'A1','Odd',19,10,0),
 (9,'A1','Odd',23,5,150),
 (7,'A1','Odd',25,5,0),
 (8,'A1','Odd',29,5,0)
;with cteBase as (
    Select *
          ,IDNew = Row_Number() over (Order By Aisle, Bay)
          ,RowNr = Row_Number() over (Order By Aisle, OddEven, Bay)
    From @Table
)
, cteGroup as (
    Select TmpRowNr = RowNr
          ,GrpNr    = Row_Number() over (Order By RowNr)
    from cteBase
    where [Y-Axis] > 0
)
, cteFinal as (
    Select A.*
          ,GrpNr = (Select max(GrpNr) from cteGroup Where TmpRowNr <= RowNr)
    From cteBase A
)
Select ID = Row_Number() over (Order By A.OddEven, A.Bay)
      ,A.Aisle
      ,A.OddEven
      ,A.Bay
      ,A.Size
      ,[Y-Axis] = Sum(case when B.[Y-Axis] > 0 then B.[Y-Axis] else B.Size end)
From cteFinal A
Join cteFinal B on (B.RowNr <= A.RowNr and A.GrpNr = B.GrpNr)
Group By A.IDNew
        ,A.Aisle
        ,A.OddEven
        ,A.Bay
        ,A.Size
Order By A.OddEven, A.Bay
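In short: cteGroup gives each seeded Y-Axis row a group number, cteFinal stamps every row with the group of the most recent seed at or before it, and the final self-join sums, per row, the seed's Y-Axis plus the sizes of the rows between the seed and that row.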
Returns
ID  Aisle  OddEven  Bay  Size  Y-Axis
 1  A1     Even     14   10    100
 2  A1     Even     16   10    110
 3  A1     Even     20   10    120
 4  A1     Even     26    5    150
 5  A1     Even     28    5    155
 6  A1     Even     32    5    160
 7  A1     Odd      13   10    100
 8  A1     Odd      17   10    110
 9  A1     Odd      19   10    120
10  A1     Odd      23    5    150
11  A1     Odd      25    5    155
12  A1     Odd      29    5    160

I have to leave my computer now, but the update query should be easy to build from here.
Below is the select query:
select row_number() over (order by OddEven, Bay) as Id,
       Aisle,
       OddEven,
       Bay,
       Size,
       -- last seeded Y-Axis value in the group, plus the sizes accumulated
       -- since it (windowed MAX/SUM with ORDER BY needs SQL Server 2012+)
       max(isnull([Y-Axis], 0)) over (partition by Aisle, OddEven, Size order by Bay)
     + sum(case when [Y-Axis] is null then Size else 0 end)
           over (partition by Aisle, OddEven, Size order by Bay) as [Y-Axis]
from oddseven
order by Id
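The question also asked for an update. Here is a minimal sketch of one way to write it, computing the values in a CTE and joining back on Id (calc and NewYAxis are illustrative names; it reuses the windowed select above, so it likewise needs SQL Server 2012+, and on 2008 the cteFinal query from the earlier answer could be substituted):
;with calc as (
    select Id,
           max(isnull([Y-Axis], 0)) over (partition by Aisle, OddEven, Size order by Bay)
         + sum(case when [Y-Axis] is null then Size else 0 end)
               over (partition by Aisle, OddEven, Size order by Bay) as NewYAxis
    from oddseven
)
update o
set o.[Y-Axis] = c.NewYAxis
from oddseven o
join calc c on c.Id = o.Id
where o.[Y-Axis] is null; -- only fill rows that are still empty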

Related

Selecting a column (such as a player) only once, first by a max value then by a min value

So I have two tables, 'AllBowlRecords' and 'AggregateBowlRecords'.
AllBowlRecords:
plr_fullnm | Wkts | Runs
------------------------
Bumrah     | 4    | 23
Bumrah     | 2    | 7
Bumrah     | 1    | 51
Bumrah     | 4    | 39
Jason      | 3    | 48
Jason      | 3    | 29
Jason      | 3    | 70
All I want is to update AggregateBowlRecords based on AllBowlRecords where Wkts is at its maximum; but if there are multiple occurrences of the maximum Wkts value, then whichever row has the minimum Runs should be selected. AggregateBowlRecords should look like this:
Bumrah | 4 | 23
Jason | 3 | 29
What are the possible solutions?
You can return the results you want using a query with row_number():
select plr_fullnm, Wkts, Runs
from (select abr.*,
             row_number() over (partition by plr_fullnm
                                order by wkts desc, runs) as seqnum
      from AllBowlRecords abr
     ) abr
where seqnum = 1;
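Since the end goal was to update AggregateBowlRecords, here is a hedged sketch of applying those ranked rows, assuming AggregateBowlRecords already holds one row per player with the same plr_fullnm, Wkts and Runs columns:
update agg
set agg.Wkts = abr.Wkts,
    agg.Runs = abr.Runs
from AggregateBowlRecords agg
join (select a.plr_fullnm, a.Wkts, a.Runs,
             row_number() over (partition by a.plr_fullnm
                                order by a.Wkts desc, a.Runs) as seqnum
      from AllBowlRecords a
     ) abr
  on abr.plr_fullnm = agg.plr_fullnm
where abr.seqnum = 1; -- best Wkts per player, ties broken by lowest Runs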

Distribute sequential SQL results evenly based on count

I have SQL results that I need to break into item ranges, with the count distributed evenly across a number of tasks. What is a good way to do this?
My data looks like this.
+------+-------+----------+
| Item | Count | ItmGroup |
+------+-------+----------+
| 1A | 100 | 1 |
| 1B | 25 | 1 |
| 1C | 2 | 1 |
| 1D | 6 | 1 |
| 2A | 88 | 2 |
| 2B | 10 | 2 |
| 2C | 122 | 2 |
| 2D | 12 | 2 |
| 3A | 4 | 3 |
| 3B | 103 | 3 |
| 3C | 1 | 3 |
| 3D | 22 | 3 |
| 4A | 55 | 4 |
| 4B | 42 | 4 |
| 4C | 100 | 4 |
| 4D | 1 | 4 |
+------+-------+----------+
Item = the item code.
Count = in this context, the popularity of the item; it can be used to RANK items if need be.
ItmGroup = a parent value for the Item column; each Item is contained in a Group.
What differentiates this from other similar questions I've viewed is that the ranges I need to determine cannot be taken out of the order shown in this table. A range can cross over ItmGroups (for example, from item 1A to 3B), but the items must remain in alphanumeric order by Item.
The expected result would be item ranges that evenly distribute the total count.
+--------+--------+----------+
| FrItem | ToItem | TotCount |
+--------+--------+----------+
| 1A     | 2D     | 134      |
| 3A     | 3D     | 130      |
(etc)
Provided you're happy with a rough estimate, this will split the data into two groups.
The first group will always have as many records as possible, but no more than half of the total count (and group 2 will have the rest).
WITH cumulative AS
(
    SELECT
        *,
        SUM([Count]) OVER (ORDER BY Item) AS cumulativeCount,
        SUM([Count]) OVER ()              AS totalCount
    FROM
        yourData
)
SELECT
    MIN(Item)    AS frItem,
    MAX(Item)    AS toItem,
    SUM([Count]) AS TotCount
FROM
    cumulative
GROUP BY
    CASE WHEN cumulativeCount <= totalCount / 2 THEN 0 ELSE 1 END
ORDER BY
    CASE WHEN cumulativeCount <= totalCount / 2 THEN 0 ELSE 1 END
To split the data into 5 portions, it's similar...
GROUP BY
    CASE WHEN cumulativeCount <= totalCount * 1/5 THEN 0
         WHEN cumulativeCount <= totalCount * 2/5 THEN 1
         WHEN cumulativeCount <= totalCount * 3/5 THEN 2
         WHEN cumulativeCount <= totalCount * 4/5 THEN 3
         ELSE 4 END
Depending on your data this isn't necessarily ideal:
Item | Count | GroupAsDefinedAbove | IdealGroup
-----+-------+---------------------+-----------
1A   | 4     | 1                   | 1
2A   | 5     | 2                   | 1
3A   | 8     | 2                   | 2
If you want something that can get the two groups as close in size as possible, that's a lot more complex.
This is the same as the accepted answer, except that it declares a batch count and adds a column to the select statement in the cumulativeCte CTE to prevent a remainder.
DECLARE @BatchCount NUMERIC(4,2) = 5.00;
WITH cumulativeCte AS
(
    SELECT
        r.*,
        SUM(r.[Count]) OVER (ORDER BY r.Item) AS cumulativeCount,
        SUM(r.[Count]) OVER ()                AS totalCount,
        CEILING(SUM(r.[Count]) OVER (ORDER BY r.Item ASC)
                / (SUM(r.[Count]) OVER () / @BatchCount)) AS BatchNo
    FROM
        records r
)
SELECT
    MIN(c.Item)    AS frItem,
    MAX(c.Item)    AS toItem,
    SUM(c.[Count]) AS TotCount,
    c.BatchNo
FROM
    cumulativeCte c
GROUP BY
    c.BatchNo
ORDER BY
    c.BatchNo

Query to get the count of data for a particular customer along with all other data from the table

My table structure is as follows:
group_id | cust_id | ticket_num
-------------------------------
60       | 12      | 1
60       | 12      | 2
60       | 12      | 3
60       | 12      | 4
60       | 30      | 5
60       | 30      | 6
60       | 31      | 7
60       | 31      | 8
65       | 02      | 1
65 | 02 | 1
I want to fetch all the data for group_id=60 and find the count of ticket_num for each customer in that group. My output should be like this:
cust_id | ticket_count | ticket_num
-----------------------------------
12      | 4            | 1
12      |              | 2
12      |              | 3
12      |              | 4
30      | 2            | 5
30      |              | 6
31      | 2            | 7
31      |              | 8
I tried this query:
SELECT gd.cust_id, Count(gd.cust_id),gd.ticket_num
FROM Group_details gd
WHERE gd.group_id = 65
GROUP BY gd.cust_id;
But this query is not working.
You appear to want the ANSI/ISO standard row_number() and count() window functions:
select gd.cust_id,
       count(*) over (partition by gd.cust_id) as num_tickets,
       row_number() over (order by gd.cust_id) as ticket_seqnum
from group_details gd
where gd.group_id = 60;
Use an aggregate and a subquery:
select t2.*, t1.ticket_num
from Group_details t1
inner join
(
    SELECT gd.cust_id, Count(gd.ticket_num) as ticket_count
    FROM Group_details gd
    where gd.group_id = 60
    GROUP BY gd.cust_id
) t2 on t1.cust_id = t2.cust_id
http://sqlfiddle.com/#!9/dd718b/1

Record batching on the basis of running total values by a specific number (FileSize-wise batching)

We are dealing with a large recordset and are currently using NTILE() to get ranges of FileIDs, then using the FileID column in a BETWEEN clause to get a specific set of records. Using FileID in the BETWEEN clause is a mandatory requirement from the developers, so we cannot have random FileIDs in one batch; it has to be incremental.
As per a new requirement, we have to make ranges based on the FileSize column, e.g. 100 GB per batch.
For example:
Batch 1: FileID 1 has size 100, so it is a batch on its own (1 record only).
Batch 2: FileIDs 2, 3, 4 and 5 add up to 80, which is < 100 GB, so FileID 6 (120 GB) has to be taken as well (total 200 GB).
Batch 3: FileID 7 alone has > 100, so it is 1 record only.
And so on…
Below is my sample code, but it is not giving the expected result:
CREATE TABLE zFiles
(
    FileId   INT,
    FileSize INT
);

INSERT INTO dbo.zFiles (FileId, FileSize)
VALUES (1, 100),
       (2, 20),
       (3, 20),
       (4, 30),
       (5, 10),
       (6, 120),
       (7, 400),
       (8, 50),
       (9, 100),
       (10, 60),
       (11, 40),
       (12, 5),
       (13, 20),
       (14, 95),
       (15, 40);

DECLARE @intBatchSize FLOAT = 100;

SELECT y.FileID,
       y.FileSize,
       y.RunningTotal,
       DENSE_RANK() OVER (ORDER BY CEILING(RunningTotal / @intBatchSize)) AS Batch
FROM (SELECT i.FileID,
             i.FileSize,
             RunningTotal = SUM(i.FileSize) OVER (ORDER BY i.FileID) -- RANGE UNBOUNDED PRECEDING
      FROM dbo.zFiles AS i WITH (NOLOCK)
     ) y
ORDER BY y.FileID;
Result:
+--------+----------+--------------+-------+
| FileID | FileSize | RunningTotal | Batch |
+--------+----------+--------------+-------+
| 1 | 100 | 100 | 1 |
| 2 | 20 | 120 | 2 |
| 3 | 20 | 140 | 2 |
| 4 | 30 | 170 | 2 |
| 5 | 10 | 180 | 2 |
| 6 | 120 | 300 | 3 |
| 7 | 400 | 700 | 4 |
| 8 | 50 | 750 | 5 |
| 9 | 100 | 850 | 6 |
| 10 | 60 | 910 | 7 |
| 11 | 40 | 950 | 7 |
| 12 | 5 | 955 | 7 |
| 13 | 20 | 975 | 7 |
| 14 | 95 | 1070 | 8 |
| 15 | 40 | 1110 | 9 |
+--------+----------+--------------+-------+
Expected Result:
+--------+---------------+---------+
| FileID | FileSize (GB) | BatchNo |
+--------+---------------+---------+
| 1 | 100 | 1 |
| 2 | 20 | 2 |
| 3 | 20 | 2 |
| 4 | 30 | 2 |
| 5 | 10 | 2 |
| 6 | 120 | 2 |
| 7 | 400 | 3 |
| 8 | 50 | 4 |
| 9 | 100 | 4 |
| 10 | 60 | 5 |
| 11 | 40 | 5 |
| 12 | 5 | 6 |
| 13 | 20 | 6 |
| 14 | 95 | 6 |
| 15 | 40 | 7 |
+--------+---------------+---------+
We could achieve this if we could somehow reset the running total once it goes over 100. We could write a loop to get this result, but that would mean going record by record, which is time-consuming.
Can somebody please help us with this?
You need to do this with a recursive CTE:
with cte as (
    -- anchor: start the first batch with the first file
    select z.fileid, z.filesize, z.filesize as batch_filesize, 1 as batchnum
    from zfiles z
    where z.fileid = 1
    union all
    -- walk the files in fileid order, resetting the accumulated size
    -- and bumping the batch number when the limit would be exceeded
    select z.fileid, z.filesize,
           (case when cte.batch_filesize + z.filesize > @intBatchSize
                 then z.filesize
                 else cte.batch_filesize + z.filesize
            end),
           (case when cte.batch_filesize + z.filesize > @intBatchSize
                 then cte.batchnum + 1
                 else cte.batchnum
            end)
    from cte join
         zfiles z
         on z.fileid = cte.fileid + 1
)
select *
from cte;
Note: I realize that fileid is probably not a sequence. You can create a sequence using row_number() in a CTE to make this work.
There is a technical reason why running sums don't work for this: essentially, any given fileid needs to know where all the breaks before it fall.
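A minimal sketch of that row_number() wrapper (numbered and seqnum are illustrative names, and the batch size is inlined as 100 to keep it self-contained): number the rows in fileid order first, then have the recursive part walk seqnum instead of assuming consecutive fileids:
with numbered as (
    -- contiguous row numbers, so gaps in fileid do not break the walk
    select z.fileid, z.filesize,
           row_number() over (order by z.fileid) as seqnum
    from zfiles z
),
cte as (
    select n.fileid, n.filesize, n.seqnum,
           n.filesize as batch_filesize, 1 as batchnum
    from numbered n
    where n.seqnum = 1
    union all
    select n.fileid, n.filesize, n.seqnum,
           case when cte.batch_filesize + n.filesize > 100
                then n.filesize
                else cte.batch_filesize + n.filesize end,
           case when cte.batch_filesize + n.filesize > 100
                then cte.batchnum + 1
                else cte.batchnum end
    from cte
    join numbered n on n.seqnum = cte.seqnum + 1
)
select *
from cte;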
A small modification to the above answer by Gordon Linoff gives the expected result.
DECLARE @intBatchSize INT = 100;

;WITH cte as (
    select z.fileid, z.filesize, z.filesize as batch_filesize, 1 as batchnum
    from zfiles z
    where z.fileid = 1
    union all
    select z.fileid, z.filesize,
           (case when cte.batch_filesize >= @intBatchSize
                 then z.filesize
                 else cte.batch_filesize + z.filesize
            end),
           (case when cte.batch_filesize >= @intBatchSize
                 then cte.batchnum + 1
                 else cte.batchnum
            end)
    from cte join
         zfiles z
         on z.fileid = cte.fileid + 1
)
select *
from cte;
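One practical note for a large recordset: this recurses once per row, and SQL Server aborts recursive CTEs after 100 levels by default, so the final select will need the MAXRECURSION hint raised (0 removes the cap):
select *
from cte
option (maxrecursion 0);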

SQL Server 2008 - accumulating column

I would like to accumulate my data; the origin table is shown below.
What is the best query to do this?
Is it possible to do this dynamically, when I add more types of terms?
Table 1
ID | term | value
------------------
 1 | I    | 100
 2 | I    | 200
 3 | II   | 100
 4 | II   | 50
 5 | II   | 75
 6 | III  | 50
 7 | III  | 65
 8 | IV   | 30
 9 | IV   | 45
And the result should be like below:
YTD | Acc Value
------------------
I-I | 300
I-II | 525
I-III| 640
I-IV | 715
Thanks
select
    -- label: the first term + '-' + the current term, e.g. 'I-III'
    (select min(term) from yourtable) + '-' + term,
    -- running total of value over every term up to and including this one
    (select sum(value) from yourtable t1 where t1.term <= t.term)
from yourtable t
group by term
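One caveat on the design: t1.term <= t.term compares the terms as strings, which happens to order I, II, III, IV correctly but would misplace terms such as V or IX. A hedged variant that orders terms by their first Id instead (assuming Ids are assigned in term order, as in the sample data):
select
    (select min(term) from yourtable) + '-' + t.term,
    -- sum every term whose first id comes at or before this term's first id
    (select sum(value)
     from yourtable t1
     where (select min(id) from yourtable m1 where m1.term = t1.term)
        <= (select min(id) from yourtable m2 where m2.term = t.term))
from yourtable t
group by t.term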