I have a table with the below data:
Table X

Seq_no  A    Claim  Payment  Balance (to be calculated)
1       abc  100    10       90   (100-10)
2       abc         50       40   (90-50)
3       abc         20       20   (40-20)
1       xyz  150    10       140  (150-10)
1       qwe  200    10       190  (200-10)
I need to calculate the column Balance.
I am trying with the below query:
SQL >
Select
Seq_no, a, Claim, Payment,
CASE
When Seq_no =1
then (claim-Payment)
Else ( lag(Balance)- Payment over (order by Balance))
END as Balance
from table X
However, I am getting an error:
ORA-00904: "Balance": invalid identifier
00904. 00000 - "%s: invalid identifier"
I believe this is because Balance is not an existing column name.
Is there a correct way to achieve the results?
Update:
I missed an important part. The data I have is in the below format:
Table X

Seq_no  A    Claim  Payment
1       abc  100    10
2       abc  100    50
3       abc  100    20
1       xyz  150    10
1       qwe  200    10
I need the results in the below format.
Table X

Seq_no  A    Claim  Payment  Balance (to be calculated)
1       abc  100    10       90   (100-10)
2       abc         50       40   (90-50)
3       abc         20       20   (40-20)
1       xyz  150    10       140  (150-10)
1       qwe  200    10       190  (200-10)
The Seq_no calculation has been done to make the Claim column null for the cases of duplicate claims, which I had figured out already.
You cannot reference Balance before you create it.
You can use a "running sum" for what you want to achieve.
Notice that I partitioned by A because you want a separate balance for every A.
Select
Seq_no, a, Claim, Payment,
sum(nvl(claim,0) - payment) over (partition by A order by seq_no) as Balance
from X;
Result:
SEQ_NO  A    CLAIM  PAYMENT  BALANCE
1       abc  100    10       90
2       abc         50       40
3       abc         20       20
1       qwe  200    10       190
1       xyz  150    10       140
EDIT: With the newer dataset you just need to replace the nvl function with a case when seq_no = 1:
Select
Seq_no,
a,
case when seq_no=1 then claim else 0 end as Claim,
Payment,
sum(case when seq_no=1 then claim else 0 end - payment)
over (partition by A order by seq_no) as Balance
from X;
In order to use balance as a column identifier, you must go one level deeper, i.e. make your existing query a sub-query.
Working demo:
SQL> WITH sample_data AS(
2 SELECT 1 Seq_no, 'abc' A, 100 Claim, 10 Payment FROM dual UNION ALL
3 SELECT 2 Seq_no, 'abc' A, NULL Claim, 50 FROM dual UNION ALL
4 SELECT 3 Seq_no, 'abc' A, NULL Claim, 20 FROM dual UNION ALL
5 SELECT 1 Seq_no, 'xyz' A, 150 Claim, 10 FROM dual UNION ALL
6 SELECT 1 Seq_no, 'qwe' A, 200 Claim, 10 FROM dual
7 )
8 -- end of sample_data mimicking real table
9 SELECT seq_no, A, claim, payment,
10 CASE
11 WHEN lag(balance) OVER(PARTITION BY A ORDER BY seq_no) IS NULL
12 THEN balance
13 ELSE lag(balance) OVER(PARTITION BY A ORDER BY seq_no) - payment
14 END balance
15 FROM
16 (SELECT seq_no, A, claim, payment,
17 CASE
18 WHEN seq_no = 1
19 THEN claim - payment
20 ELSE lag(claim - payment) OVER(PARTITION BY A ORDER BY seq_no) - payment
21 END balance
22 FROM sample_data
23 );
SEQ_NO A CLAIM PAYMENT BALANCE
---------- --- ---------- ---------- ----------
1 abc 100 10 90
2 abc 50 40
3 abc 20 20
1 qwe 200 10 190
1 xyz 150 10 140
SQL>
UPDATE: If you always have non-null values for claim, then you could do a running diff:
SQL> WITH sample_data AS(
2 SELECT 1 Seq_no, 'abc' A, 100 Claim, 10 Payment FROM dual UNION ALL
3 SELECT 2 Seq_no, 'abc' A, 100 Claim, 50 FROM dual UNION ALL
4 SELECT 3 Seq_no, 'abc' A, 100 Claim, 20 FROM dual UNION ALL
5 SELECT 4 Seq_no, 'abc' A, 100 Claim, 10 FROM dual UNION ALL
6 SELECT 5 Seq_no, 'abc' A, 100 Claim, 10 FROM dual UNION ALL
7 SELECT 1 Seq_no, 'xyz' A, 150 Claim, 10 FROM dual UNION ALL
8 SELECT 1 Seq_no, 'qwe' A, 200 Claim, 10 FROM dual
9 )
10 -- end of sample_data mimicking real table
11 SELECT Seq_no,
12 a,
13 Claim,
14 Payment,
15 CASE
16 WHEN seq_no = 1
17 THEN claim - payment
18 ELSE SUM(claim - payment) over (partition BY A order by seq_no) - claim*(seq_no-1)
19 END balance
20 FROM sample_data;
SEQ_NO A CLAIM PAYMENT BALANCE
---------- --- ---------- ---------- ----------
1 abc 100 10 90
2 abc 100 50 40
3 abc 100 20 20
4 abc 100 10 10
5 abc 100 10 0
1 qwe 200 10 190
1 xyz 150 10 140
7 rows selected.
SQL>
You are correct that the error occurs because Balance is not a column in your table. You need to add that column.
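For example, a minimal sketch in the SQL Server dialect used below, assuming the MONEY type chosen for the other amounts:

ALTER TABLE X ADD Balance MONEY;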
The bigger problem is that unfortunately it's not possible to compute a running total using a single SQL statement the way you are trying to do it. SQL is not well suited to an operation like this. It can be done in a stored procedure using a cursor but the end result is procedural and not set-based, thus performance would be poor. Here's how to do it in SQL Server (as an example, not tested):
DECLARE PaymentCursor CURSOR FOR SELECT A, Claim, Payment FROM X ORDER BY A, Seq_no
OPEN PaymentCursor
DECLARE @A VARCHAR(16)
DECLARE @Claim MONEY
DECLARE @Payment MONEY
DECLARE @Balance MONEY
DECLARE @PreviousA VARCHAR(16)
SET @PreviousA = ''
FETCH NEXT FROM PaymentCursor INTO @A, @Claim, @Payment
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @A <> @PreviousA
    BEGIN
        SET @Balance = 0
        SET @PreviousA = @A
    END
    SET @Balance = @Balance + @Claim - @Payment
    UPDATE X SET Balance = @Balance WHERE CURRENT OF PaymentCursor
    FETCH NEXT FROM PaymentCursor INTO @A, @Claim, @Payment
END
CLOSE PaymentCursor
DEALLOCATE PaymentCursor
If your application really needs to know the account balance associated with each transaction, the better approach is to compute the balance each time a row is inserted. Wrap the insertion logic in a stored procedure:
CREATE PROCEDURE InsertAndComputeBalance
    @A VARCHAR(16),
    @Claim MONEY,
    @Payment MONEY
AS
    DECLARE @MAX_Seq_no INTEGER = (SELECT MAX(Seq_no) FROM X WHERE A = @A)
    INSERT INTO X
    SELECT @MAX_Seq_no + 1, @A, @Claim, @Payment, Balance + @Claim - @Payment FROM X
    WHERE A = @A AND Seq_no = @MAX_Seq_no
In theory you could call similar logic from a trigger so that it is automatically executed each time a row is inserted, but triggers can be evil so proceed very carefully with that approach.
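For illustration only, a hedged sketch of what such a trigger might look like in SQL Server, assuming single-row inserts and that earlier rows for the same A already have Balance populated (the trigger name and aliases are hypothetical):

-- Sketch: apply the same "previous balance + claim - payment" logic when a row is inserted
CREATE TRIGGER trg_X_SetBalance ON X
AFTER INSERT
AS
BEGIN
    UPDATE x
    SET    x.Balance = ISNULL(prev.Balance, 0) + i.Claim - i.Payment
    FROM   X x
    JOIN   inserted i
           ON x.A = i.A AND x.Seq_no = i.Seq_no
    -- most recent earlier row for the same A, if any
    OUTER APPLY (SELECT TOP 1 p.Balance
                 FROM   X p
                 WHERE  p.A = i.A AND p.Seq_no < i.Seq_no
                 ORDER BY p.Seq_no DESC) AS prev;
END

As with the stored procedure, multi-row inserts and concurrent sessions would need extra care here.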
So I have a scenario where I need to order data on a column without including it in dense_rank(). Here is my sample data set:
This is the table:
create table temp
(
id integer,
prod_name varchar(max),
source_system integer,
source_date date,
col1 integer,
col2 integer);
This is the dataset:
insert into temp
(id,prod_name,source_system,source_date,col1,col2)
values
(1,'ABC',123,'01/01/2021',50,60),
(2,'ABC',123,'01/15/2021',50,60),
(3,'ABC',123,'01/30/2021',40,60),
(4,'ABC',123,'01/30/2021',40,70),
(5,'XYZ',456,'01/10/2021',80,30),
(6,'XYZ',456,'01/12/2021',75,30),
(7,'XYZ',456,'01/20/2021',75,30),
(8,'XYZ',456,'01/20/2021',99,30);
Now, I want to do dense_rank() on the data in such a way that, for a combination of prod_name and source_system, the rank gets incremented only if there is a change in col1 or col2, but the data should still be in ascending order of source_date.
Here is the expected result:
id  prod_name  source_system  source_date  col1  col2  Dense_Rank
1   ABC        123            01-01-21     50    60    1
2   ABC        123            15-01-21     50    60    1
3   ABC        123            30-01-21     40    60    2
4   ABC        123            30-01-21     40    70    3
5   XYZ        456            10-01-21     80    30    1
6   XYZ        456            12-01-21     75    30    2
7   XYZ        456            20-01-21     75    30    2
8   XYZ        456            20-01-21     99    30    3
As you can see above, the dates are changing but the expectation is that rank should only change if there is any change in either col1 or col2.
If I use this query
select id,prod_name,source_system,source_date,col1,col2,
dense_rank() over(partition by prod_name,source_system order by source_date,col1,col2) as rnk
from temp;
Then the result would come as:
id  prod_name  source_system  source_date  col1  col2  rnk
1   ABC        123            01-01-21     50    60    1
2   ABC        123            15-01-21     50    60    2
3   ABC        123            30-01-21     40    60    3
4   ABC        123            30-01-21     40    70    4
5   XYZ        456            10-01-21     80    30    1
6   XYZ        456            12-01-21     75    30    2
7   XYZ        456            20-01-21     75    30    3
8   XYZ        456            20-01-21     99    30    4
And if I exclude source_date from the order by in the rank function, i.e.
select id,prod_name,source_system,source_date,col1,col2,
dense_rank() over(partition by prod_name,source_system order by col1,col2) as rnk
from temp;
Then my result is coming as:
id  prod_name  source_system  source_date  col1  col2  rnk
3   ABC        123            30-01-21     40    60    1
4   ABC        123            30-01-21     40    70    2
1   ABC        123            01-01-21     50    60    3
2   ABC        123            15-01-21     50    60    3
6   XYZ        456            12-01-21     75    30    1
7   XYZ        456            20-01-21     75    30    1
5   XYZ        456            10-01-21     80    30    2
8   XYZ        456            20-01-21     99    30    3
Both the results are incorrect. How can I get the expected result? Any guidance would be helpful.
WITH cte AS (
SELECT *,
LAG(col1) OVER (PARTITION BY prod_name, source_system ORDER BY source_date, id) lag1,
LAG(col2) OVER (PARTITION BY prod_name, source_system ORDER BY source_date, id) lag2
FROM temp
)
SELECT *,
SUM(CASE WHEN (col1, col2) = (lag1, lag2)
THEN 0
ELSE 1
END) OVER (PARTITION BY prod_name, source_system ORDER BY source_date, id) AS `Dense_Rank`
FROM cte
ORDER BY id;
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=ac70104c7c5dfb49c75a8635c25716e6
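If your database doesn't support comparing row values like (col1, col2) = (lag1, lag2) (SQL Server doesn't, for instance), a hedged equivalent of the query above spells out the comparisons column by column (assuming col1/col2 themselves are never NULL, as in the sample data; rnk_calc is just a placeholder alias):

WITH cte AS (
    SELECT t.*,
           LAG(col1) OVER (PARTITION BY prod_name, source_system ORDER BY source_date, id) AS lag1,
           LAG(col2) OVER (PARTITION BY prod_name, source_system ORDER BY source_date, id) AS lag2
    FROM temp t
)
SELECT cte.*,
       SUM(CASE WHEN col1 = lag1 AND col2 = lag2 THEN 0 ELSE 1 END)
           OVER (PARTITION BY prod_name, source_system ORDER BY source_date, id) AS rnk_calc
FROM cte
ORDER BY id;

The first row of each partition has NULL lag values, so the CASE falls through to 1 and each group starts at rank 1, the same as the original query.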
When comparing multiple columns, I like to look at the previous values of the ordering column, rather than the individual columns. This makes it much simpler to add more and more columns.
The basic idea is to do a cumulative sum of changes for each prod/source system. In Redshift, I would phrase this as:
select t.*,
sum(case when prev_date = prev_date_2 then 0 else 1 end) over (
partition by prod_name, source_system
order by source_date
rows between unbounded preceding and current row
)
from (select t.*,
lag(source_date) over (partition by prod_name, source_system order by source_date, id) as prev_date,
lag(source_date) over (partition by prod_name, source_system, col1, col2 order by source_date, id) as prev_date_2
from temp t
) t
order by id;
I think I have the syntax right for Redshift. Here is a db<>fiddle using Postgres.
Note that ties on the date can cause a problem -- regardless of the solution. This uses the id to break the ties. Perhaps id can just be used in general, but your code is using the date, so this uses the date with the id.
I have a temp table like this:
id d tax_rate money
1 20210101 5 100
1 20210201 15 0
1 20210301 20 0
1 20210401 5 0
This is the output I want to select:
id d tax_rate money total
1 20210101 5 100 105
1 20210201 15 105 120.75
1 20210301 20 120.75 144.9
1 20210401 5 144.9 152.145
This means that I need to recursively calculate the total based on tax_rate and the previous total (on the first day, the previous total = money).
total = previous total (by date) * (1 + tax_rate) (tax_rate in percent)
I tried using LAG() OVER(), but LAG only looks at the previous row, not recursively, so from the 3rd day onwards the calculation returns the wrong total.
In my case, if I could use LAG or any function to multiply all the previous tax rates (e.g. 1.05 * 1.15 * 1.2 = 1.449) then I could calculate the right previous total, but I had no luck finding a function to do that.
WITH tmp AS
(
SELECT 1 AS id, 20210101 AS d, 5 AS tax_rate, 100 AS money FROM dual UNION ALL
SELECT 1 AS id, 20210201 AS d, 15 AS tax_rate, 0 AS money FROM dual UNION ALL
SELECT 1 AS id, 20210301 AS d, 20 AS tax_rate, 0 AS money FROM dual UNION ALL
SELECT 1 AS id, 20210401 AS d, 5 AS tax_rate, 0 AS money FROM dual
)
SELECT *
FROM tmp;
You can use a mathematical identity to turn a running sum into a running product: since a * b = EXP(LN(a) + LN(b)), EXP of a running SUM of LN(1 + tax_rate/100) gives the running multiplication factor.
Then calculate the total by multiplying the money by that accumulated factor.
Query 1:
SELECT ID, D, tax_rate,
       SUM(money) OVER (PARTITION BY ID ORDER BY ID)
         * EXP(SUM(LN(CAST(tax_rate AS DECIMAL(5,2)) / 100 + 1)) OVER (PARTITION BY ID ORDER BY d)) total
FROM tmp
Results:
| ID | D | TAX_RATE | TOTAL |
|----|----------|----------|---------|
| 1 | 20210101 | 5 | 105 |
| 1 | 20210201 | 15 | 120.75 |
| 1 | 20210301 | 20 | 144.9 |
| 1 | 20210401 | 5 | 152.145 |
One option would be something like this
WITH tmp AS
(
SELECT 1 AS id, 20210101 AS d, 5 AS tax_rate, 100 AS money FROM dual UNION ALL
SELECT 1 AS id, 20210201 AS d, 15 AS tax_rate, 0 AS money FROM dual UNION ALL
SELECT 1 AS id, 20210301 AS d, 20 AS tax_rate, 0 AS money FROM dual UNION ALL
SELECT 1 AS id, 20210401 AS d, 5 AS tax_rate, 0 AS money FROM dual
),
running_total( id, d, tax_rate, money, total )
as (
select id, d, tax_rate, money, money * (1 + tax_rate/100) total
from tmp
where money != 0
union all
select t.id, t.d, t.tax_rate, t.money, rt.total * (1 + t.tax_rate/100)
from tmp t
join running_total rt
on t.id = rt.id
and to_date( rt.d, 'yyyyddmm' ) = to_date( t.d, 'yyyyddmm' ) - 1
)
select *
from running_total;
See this dbfiddle.
I am assuming that the first row, which forms the base of the recursive CTE, is the row where money != 0 (so there would be only one such row per id). You could change that to pick the row with the earliest date per id or whatever other "first row" logic your actual data supports.
Note that life will be easier for you if you use actual dates for dates rather than using numbers that represent dates. For a 4 row virtual table, it won't matter much that you have to do a to_date on both sides of the join in the running_total recursive CTE. But for a real table with a decent number of rows, you'd want to be able to have an index on (id, d) to get decent performance. You could, of course, create a function-based index but then you'd either need to explicitly specify things like the NLS environment in your to_date call or deal with the potential for sessions not to use your index if their NLS environment doesn't match the NLS settings used to create the index.
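To illustrate, a hedged sketch against a hypothetical table x2 that stores d as a real DATE (keeping the assumption from the join above that successive rows per id are one day apart); the recursive join then needs no TO_DATE, and a plain index on (id, d) is usable:

-- hypothetical table: same data, but d is a real DATE
create table x2 (id number, d date, tax_rate number, money number);
create index x2_id_d on x2 (id, d);

with running_total (id, d, tax_rate, money, total) as (
  select id, d, tax_rate, money, money * (1 + tax_rate / 100)
  from   x2
  where  money != 0
  union all
  select t.id, t.d, t.tax_rate, t.money, rt.total * (1 + t.tax_rate / 100)
  from   x2 t
  join   running_total rt
    on   t.id = rt.id
   and   t.d  = rt.d + 1   -- plain date arithmetic, no conversion needed
)
select *
from   running_total;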
I am trying to fetch data for one of my clients, but there are missing tokens in his data. I tried the below query for fetching the missing data and inserting it into one dump table, but I would like to find a more optimized way to find the missing tokens along with the tokens present in the table.
SELECT MinToken + 1 Level
FROM (SELECT Min(Token) AS MINTOKEN, MAX(Token) AS MAXTOKEN
FROM dmp_SellerOrders
Where OrderNo = 1
AND TradeDate = '27-Oct-20')
CONNECT BY LEVEL < MAXTOKEN - MINTOKEN
MINUS
SELECT TOKEN
FROM dmp_SellerOrders
Where (OrderNo, TranserialNo) IN (SELECT OrderNo, MAX(TranserialNo)
FROM dmp_SellerOrders
GROUP BY OrderNo)
AND TradeDate = '27-Oct-20';
Actual data in the table looks like this:

Original Order
OrderNo  TranserialNo  Token  Qty  Price
1        1             25     100  100
1        1             26     100  100
1        1             27     100  100
1        1             28     100  100
1        1             30     100  100
1        1             31     100  100

Order Price Modified
OrderNo  TranserialNo  Token  Qty  Price
1        2             25     100  200
1        2             26     100  200
1        2             27     100  200
1        2             28     100  200
1        2             30     100  200
1        2             31     100  200
I need the data to show grouped by Qty, as shown below:

OrderNo  MinToken  MaxToken  Qty  Price
1        25        28        100  200
1        29        29        0    0
1        30        31        100  200
But if I group by Qty then I get the below output:

OrderNo  MinToken  MaxToken  Qty  Price
1        25        31        100  200
1        29        29        0    0
Can anyone please help me: how can I get the output as expected?
Regards,
Mehul
Here is a simplified solution without OrderNo and TradeDate to demonstrate the concept.
You perform the following steps in subsequent subqueries:
get all tokens and outer join to the order table to get a complete sequence
find the LAG of the qty column
set grp_id to 1 if there is a break, i.e. if the previous qty is null and the current is not, or vice versa; otherwise set it to 0
accumulate the grp_id using the analytic form of SUM
Here is the intermediate result with the sample data:
TOKEN   QTY     PRICE   QTY_LAG GRP_ID  CUM_GRP_ID
-----   ---     -----   ------- ------  ----------
25      100     100             1       1
26      100     100     100     0       1
27      100     100     100     0       1
28      100     100     100     0       1
29                      100     1       2
30      100     100             1       3
The last step is a simple GROUP BY on qty, price with the accumulated group id cum_grp_id added, which splits the non-adjacent groups.
Query
with orders as (
select 25 token, 100 qty, 100 price from dual union all
select 26 token, 100 qty, 100 price from dual union all
select 27 token, 100 qty, 100 price from dual union all
select 28 token, 100 qty, 100 price from dual union all
select 30 token, 100 qty, 100 price from dual union all
select 31 token, 100 qty, 100 price from dual),
tokens as (
select min(token) min_token, max(token) max_token from orders),
all_tokens as (
select min_token - 1 + level token from tokens
connect by level <= max_token - min_token),
grp as (
select
t.token,
o.qty, o.price,
lag(o.qty) over (order by t.token) as qty_lag
from all_tokens t
left outer join orders o
on t.token = o.token),
grp2 as (
select
token, qty, price, qty_lag,
case when qty is null and qty_lag is null or qty is not null and qty_lag is not null then 0 else 1 end as grp_id
from grp),
grp3 as (
select
token, qty, price, qty_lag, grp_id,
sum(grp_id) over (order by token) cum_grp_id
from grp2)
select min(token), max(token), qty, price
from grp3
group by cum_grp_id, qty, price
order by 1
Result:
MIN(TOKEN) MAX(TOKEN) QTY PRICE
---------- ---------- ---------- ----------
25 28 100 100
29 29
30 30 100 100
Adapt if required by partitioning the analytic function by orderNo and/or tradeDate.
Currently I am working on a fixed asset register report and I need to add projection columns to the main query.
          Depreciation  Remaining Life of
Asset_No  Amount        Asset in Months
--------  ------------  -----------------
1         400           6
2         200           3
3         100           4
4         600           1
Now, I want to modify the SQL so that projection columns are generated according to the remaining life of the asset in months. For the 1st asset the remaining life in months is 6, so all 6 projection columns should come out with the value 400. If the remaining life is less than the maximum number, i.e. 6, then it should give 0 for the rest of the columns.
I want the final solution like below:

          Depreciation  Remaining Life of
Asset_No  Amount        Asset in Months    Projection 1  Projection 2  Projection 3  Projection 4  Projection 5  Projection 6
--------  ------------  -----------------  ------------  ------------  ------------  ------------  ------------  ------------
1         400           6                  400           400           400           400           400           400
2         200           3                  200           200           200           0             0             0
3         100           4                  100           100           100           100           0             0
4         600           1                  600           0             0             0             0             0
You can use a simple case expression for each projection:
-- CTE for sample data
with your_table (asset_no, amount, remaining_months) as (
select 1, 400, 6 from dual
union all select 2, 200, 3 from dual
union all select 3, 100, 4 from dual
union all select 4, 600, 1 from dual
)
-- query using my CTE column names
select asset_no, amount, remaining_months,
case when remaining_months >= 1 then amount else 0 end as proj_1,
case when remaining_months >= 2 then amount else 0 end as proj_2,
case when remaining_months >= 3 then amount else 0 end as proj_3,
case when remaining_months >= 4 then amount else 0 end as proj_4,
case when remaining_months >= 5 then amount else 0 end as proj_5,
case when remaining_months >= 6 then amount else 0 end as proj_6
from your_table;
ASSET_NO AMOUNT REMAINING_MONTHS PROJ_1 PROJ_2 PROJ_3 PROJ_4 PROJ_5 PROJ_6
---------- ---------- ---------------- ---------- ---------- ---------- ---------- ---------- ----------
1 400 6 400 400 400 400 400 400
2 200 3 200 200 200 0 0 0
3 100 4 100 100 100 100 0 0
4 600 1 600 0 0 0 0 0
This is basically the dynamic version of Alex's query (as a bonus!).
Note the use of a refcursor bind variable. This will work when you run it in SQL*Plus or as a script (F5) in SQL Developer or Toad.
You may also use DBMS_SQL.RETURN_RESULT in Oracle 12c and above to do the same; a sketch of that variant follows the output below.
VARIABLE x REFCURSOR;
DECLARE
    v_case_expr VARCHAR2(1000);
BEGIN
    SELECT listagg('CASE WHEN remaining_months >= ' || level
                   || ' then amount else 0 end as proj_' || level, ',')
             WITHIN GROUP (ORDER BY level)
    INTO   v_case_expr
    FROM   dual
    CONNECT BY level <= (SELECT MAX(remaining_months) FROM assets);

    OPEN :x FOR 'select asset_no, amount, remaining_months, '
                || v_case_expr
                || ' FROM assets';
END;
/
PRINT x;
PL/SQL procedure successfully completed.
ASSET_NO AMOUNT REMAINING_MONTHS PROJ_1 PROJ_2 PROJ_3 PROJ_4 PROJ_5 PROJ_6
---------- ---------- ---------------- ---------- ---------- ---------- ---------- ---------- ----------
1 400 6 400 400 400 400 400 400
2 200 3 200 200 200 0 0 0
3 100 4 100 100 100 100 0 0
4 600 1 600 0 0 0 0 0
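As mentioned above, on Oracle 12c and later the same result can be returned without a bind variable via DBMS_SQL.RETURN_RESULT. A hedged sketch, reusing the same dynamically built CASE list (clients that support implicit results, such as SQL Developer or SQLcl, will print the result set):

DECLARE
    v_case_expr VARCHAR2(1000);
    v_rc        SYS_REFCURSOR;
BEGIN
    SELECT listagg('CASE WHEN remaining_months >= ' || level
                   || ' then amount else 0 end as proj_' || level, ',')
             WITHIN GROUP (ORDER BY level)
    INTO   v_case_expr
    FROM   dual
    CONNECT BY level <= (SELECT MAX(remaining_months) FROM assets);

    OPEN v_rc FOR 'select asset_no, amount, remaining_months, ' || v_case_expr || ' FROM assets';
    DBMS_SQL.RETURN_RESULT(v_rc);   -- hand the open cursor back to the client as an implicit result set
END;
/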
When the amount of calculation increases, a pivot can sometimes be better.
Here I use a cartesian product first to create the data:
with dat (asset_no, dep_amount, r_life)as
( select 1, 400, 6 from dual union all
select 2, 200, 3 from dual union all
select 3, 100, 4 from dual union all
select 4, 600, 1 from dual )
, mon as (select level lv from dual connect by level <= 6)
, bas as (
select asset_no, dep_amount, r_life, lv, case when r_life >= lv then dep_amount else 0 end proj_m from dat, mon)
select *
from bas
pivot(sum(proj_m) for lv in (1 as proj_1,2 as proj_2,3 as proj_3,4 as proj_4,5 as proj_5,6 as proj_6))
order by asset_no
I think the solution with case expressions is better here.
I need to find how many records it took to reach a given value. I have a table in the below format:
ID  Name        Time  Time 2
1   Campaign 1  7     100
2   Campaign 3  5     165
3   Campaign 1  3     321
4   Campaign 2  610   952
5   Campaign 2  15    13
6   Campaign 2  310   5
7   Campaign 3  0     3
8   Campaign 1  0     610
9   Campaign 1  1     15
10  Campaign 1  54    310
11  Campaign 3  4     0
12  Campaign 2  23    0
13  Campaign 2  8     1
14  Campaign 3  23    1
15  Campaign 3  7     0
16  Campaign 3  5     5
17  Campaign 3  2     66
18  Campaign 3  100   7
19  Campaign 1  165   3
20  Campaign 1  321   13
21  Campaign 1  952   5
22  Campaign 1  13    3
23  Campaign 2  15    610
24  Campaign 2  0     15
25  Campaign 1  100   310
26  Campaign 2  165   0
27  Campaign 3  321   0
28  Campaign 3  952   1
29  Campaign 3  0     1
30  Campaign 3  5     0
I'd like to find out how many entries of 'Campaign 1' there were before the total of Time + Time 2 was equal to or greater than a given number.
As an example, the result for Campaign 1 to reach 1400 should be 5: taking the Campaign 1 rows in ID order, the running total of Time + Time 2 goes 107, 431, 1041, 1057, 1421, and the fifth row is the first to reach 1400.
Apologies if I haven't explained this clearly enough - the concept is still a little muddy at the moment.
Thanks
In SQL Server 2012, you can get the row using:
select t.*
from (select t.*, sum(time1 + time2) over (partition by name order by id) as cumsum
      from table t
     ) t
where cumsum >= @VALUE and (cumsum - (time1 + time2)) < @VALUE;
You can get the count using:
select name, count(*)
from (select t.*, sum(time1 + time2) over (partition by name order by id) as cumsum
      from table t
     ) t
where (cumsum - (time1 + time2)) < @VALUE
group by name;
If you are not using SQL Server 2012, you can do the cumulative sum with a correlated subquery:
select name, count(*)
from (select t.*,
             (select sum(time1 + time2)
              from table t2
              where t2.name = t.name and
                    t2.id <= t.id
             ) as cumsum
      from table t
     ) t
where (cumsum - (time1 + time2)) < @VALUE
group by name;
A CTE computing a running total with a window function should work:
;WITH CTE AS
(
SELECT id,
name,
SUM([time]+[time 2])
OVER (ORDER BY id ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
AS RunningTotal
FROM Table1
WHERE name = 'Campaign 1'
)
SELECT count(*)+1 AS [Count]
FROM CTE
WHERE RunningTotal < 1400
Note that I added 1 to the count as the query counts the number of rows needed to reach up to, but not including, 1400. Logic dictates that the next row will push the value above 1400.
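For completeness, a hedged variation of the same query with the threshold and campaign as variables (@Value and @Name are hypothetical names, not from the original post):

DECLARE @Value INT = 1400;
DECLARE @Name  VARCHAR(50) = 'Campaign 1';

WITH CTE AS
(
    SELECT id,
           SUM([time] + [time 2])
               OVER (ORDER BY id ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
    FROM Table1
    WHERE name = @Name
)
SELECT COUNT(*) + 1 AS [Count]
FROM CTE
WHERE RunningTotal < @Value;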