Date Range for set of same data - sql

I am trying to build a SQL query that gives me the date range for dates with the same price. If there is a break in the prices, I expect to see it on a new line. Even if the same price recurs later in the month, a price change in between should produce two separate rows, each with its own date range.
Sample Data:
Date Price
1-Jan 3.2
2-Jan 3.2
3-Jan 3.2
4-Jan 3.2
5-Jan 3.2
6-Jan 3.2
7-Jan 3.2
8-Jan 3.2
9-Jan 3.5
10-Jan 3.5
11-Jan 3.5
12-Jan 3.5
13-Jan 3.5
14-Jan 4.2
15-Jan 4.2
16-Jan 4.2
17-Jan 3.2
18-Jan 3.2
19-Jan 3.2
20-Jan 3.2
21-Jan 3.2
22-Jan 3
23-Jan 3
24-Jan 3
25-Jan 3
26-Jan 3
27-Jan 3
28-Jan 3
29-Jan 3.5
30-Jan 3.5
31-Jan 3.5
Desired Result:
Price Date Range
3.2 1-8
3.5 9-13
4.2 14-16
3.2 17-21
3 22-28
3.5 29-31
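For reference, here is a minimal setup sketch for this sample data. The table name, column names, and the year are assumptions (the first answer below refers to the table as MyTable, and only the day and month are given above):
CREATE TABLE MyTable ([Date] date PRIMARY KEY, Price decimal(9,2));
-- Year 2016 chosen arbitrarily; only day and month are specified in the question.
INSERT INTO MyTable ([Date], Price) VALUES
('2016-01-01', 3.2), ('2016-01-02', 3.2), ('2016-01-03', 3.2), ('2016-01-04', 3.2),
('2016-01-05', 3.2), ('2016-01-06', 3.2), ('2016-01-07', 3.2), ('2016-01-08', 3.2),
('2016-01-09', 3.5), ('2016-01-10', 3.5), ('2016-01-11', 3.5), ('2016-01-12', 3.5),
('2016-01-13', 3.5), ('2016-01-14', 4.2), ('2016-01-15', 4.2), ('2016-01-16', 4.2),
('2016-01-17', 3.2), ('2016-01-18', 3.2), ('2016-01-19', 3.2), ('2016-01-20', 3.2),
('2016-01-21', 3.2), ('2016-01-22', 3.0), ('2016-01-23', 3.0), ('2016-01-24', 3.0),
('2016-01-25', 3.0), ('2016-01-26', 3.0), ('2016-01-27', 3.0), ('2016-01-28', 3.0),
('2016-01-29', 3.5), ('2016-01-30', 3.5), ('2016-01-31', 3.5);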

Non-relational Solution
I don't think any of the other answers are correct.
GROUP BY won't work
Using ROW_NUMBER() forces the data into a Record Filing System structure, which is physical, and then processes it as physical records. At a massive performance cost. Of course, in order to write such code, it forces you to think in terms of RFS instead of thinking in Relational terms.
Using CTEs is the same. Iterating through the data, especially data that does not change. At a slightly different massive cost.
Cursors are definitely the wrong thing for a different set of reasons: (a) cursors require code, and you have requested a View; (b) cursors abandon the set-processing engine and revert to row-by-row processing. Again, not required. If a developer on any of my teams uses cursors or temp tables on a Relational Database (i.e. not a Record Filing System), I shoot them.
Relational Solution
Your data is Relational and logical; the two given data columns are all that is necessary.
Sure, we have to form a View (derived Relation) to obtain the desired report, but that consists of pure SELECTs, which is quite different from processing (converting it to a file, which is physical, and then processing the file; or temp tables; or worktables; or CTEs; or ROW_NUMBER(); etc).
Contrary to the lamentations of "theoreticians", who have an agenda, SQL handles Relational data perfectly well. And your data is Relational.
Therefore, maintain a Relational mindset, a Relational view of the data, and a set-processing mentality. Every report requirement over a Relational Database can be fulfilled using a single SELECT. There is no need to regress to pre-1970 ISAM File handling methods.
I will assume the Primary Key (the set of columns that give a Relational row uniqueness) is Date, and based on the example data given, the Datatype is DATE.
Try this:
CREATE VIEW MyTable_Base_V -- Foundation View
AS
SELECT Date,
       Date_Next,
       Price
FROM (
     -- Derived Table: project rows with what we need
     SELECT Date,
            [Date_Next]  = DATEADD( DD, 1, MT.Date ),
            Price,
            [Price_Next] = (
                SELECT Price -- NULL if not exists
                FROM MyTable
                WHERE Date = DATEADD( DD, 1, MT.Date )
                )
     FROM MyTable MT
     ) AS X
WHERE Price != Price_Next    -- exclude unchanging rows
   OR Price_Next IS NULL     -- keep the final row, which has no next row
GO
CREATE VIEW MyTable_V -- Requested View
AS
SELECT [Date_From] = ISNULL( (
           -- Date_Next of the previous price-change row
           SELECT MAX( Date_Next ) -- previous row
           FROM MyTable_Base_V
           WHERE Date_Next <= MT.Date
           ),
           ( SELECT MIN( Date ) FROM MyTable ) -- first range has no previous row
           ),
       [Date_To] = Date, -- this row
       Price
FROM MyTable_Base_V MT
GO
SELECT *
FROM MyTable_V
GO
Method, Generic
Of course this is a method, and therefore it is generic: it can be used to determine the From_ and To_ of any data range (here, a Date range), based on any data change (here, a change in Price).
Here, your Dates are consecutive, so the determination of Date_Next is simple: increment the Date by 1 day. If the PK is increasing but not consecutive (e.g. DateTime or TimeStamp or some other Key), change the Derived Table X to:
-- Derived Table: project rows with what we need
SELECT DateTime,
       [DateTime_Next] = (
           -- first row > this row (ORDER BY makes TOP 1 deterministic)
           SELECT TOP 1
                  DateTime -- NULL if not exists
           FROM MyTable
           WHERE DateTime > MT.DateTime
           ORDER BY DateTime
           ),
       Price,
       [Price_Next] = (
           -- first row > this row
           SELECT TOP 1
                  Price -- NULL if not exists
           FROM MyTable
           WHERE DateTime > MT.DateTime
           ORDER BY DateTime
           )
FROM MyTable MT
Enjoy.
Please feel free to comment, ask questions, etc.

You can do this by adding a grouping column. A neat trick for this is the difference of two sequences of numbers -- when the difference is constant, then the price is the same.
select price, min(date), max(date)
from (select s.*,
             (row_number() over (order by date) -
              row_number() over (partition by price order by date)
             ) as grp
      from sample s
     ) s
group by grp, price;
Note: be careful that price is stored as a fixed-point decimal rather than a floating-point value. Otherwise, values that look the same might not actually be equal.
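To see how the grouping column behaves, here is a sketch that exposes the two row numbers on the sample data; it assumes the same sample table and columns as the query above:
select s.date, s.price,
       row_number() over (order by s.date) as rn_overall,
       row_number() over (partition by s.price order by s.date) as rn_within_price,
       row_number() over (order by s.date) -
       row_number() over (partition by s.price order by s.date) as grp
from sample s
order by s.date;
-- 1-8 Jan (3.2): rn_overall 1..8, rn_within_price 1..8, so grp = 0
-- 9-13 Jan (3.5): rn_overall 9..13, rn_within_price 1..5, so grp = 8
-- 17-21 Jan (3.2): rn_overall 17..21, rn_within_price 9..13, so grp = 8 again,
-- but the price differs, which is why the outer query groups by both grp and price.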

This is what you are looking for:
declare @temptbl table (price decimal(18,2), mindate date, maxdate date)
declare @price as decimal(18,2), @date as date
declare tempcur cursor for
select price, date
from YourTable
order by date -- process rows in date order
open tempcur
fetch next from tempcur
into @price, @date
while (@@fetch_status = 0)
begin
if (isnull((select price from @temptbl where maxdate = (select max(maxdate) from @temptbl)), 0) <> @price)
insert into @temptbl (price, mindate, maxdate) values (@price, @date, @date)
else
update @temptbl
set maxdate = @date
where maxdate = (select max(maxdate) from @temptbl)
fetch next from tempcur
into @price, @date
end
close tempcur
deallocate tempcur
select price, convert(nvarchar(50), mindate) + ' to ' + convert(nvarchar(50), maxdate) as [date range] from @temptbl

Use a CTE; below is working code.
WITH grouped AS (
SELECT
Pricedate, price,
grp1= ROW_NUMBER() OVER (ORDER BY Pricedate) -
ROW_NUMBER() OVER (Partition by price ORDER BY Pricedate)
FROM yourTablewithDateAndPrice
)
SELECT
DtFrom = MIN(Pricedate),
DtTo = MAX(Pricedate),
Price = price
FROM grouped
GROUP BY Price,grp1
order by DtFrom;
The inner query assigns the same group number (grp1) to consecutive rows as long as the price stays the same; when the price changes, the group number changes.
The final GROUP BY then gives the required result.

Related

IBM DB2: Generate list of dates between two dates

I need a query which will output a list of dates between two given dates.
For example, if my start date is 23/02/2016 and end date is 02/03/2016, I am expecting the following output:
Date
----
23/02/2016
24/02/2016
25/02/2016
26/02/2016
27/02/2016
28/02/2016
29/02/2016
01/03/2016
02/03/2016
Also, I need the above using SQL only (without the use of 'WITH' statement or tables). Please help.
I am using mostly DB2 for iSeries, so I will give you an SQL-only solution that works on it. Currently I don't have access to the server, so the query is not tested, but it should work. EDIT: the query has now been tested and is working.
SELECT
d.min + num.n DAYS
FROM
-- create inline table with min max date
(VALUES(DATE('2015-02-28'), DATE('2016-03-01'))) AS d(min, max)
INNER JOIN
-- create inline table with numbers from 0 to 999
(
SELECT
n1.n + n10.n + n100.n AS n
FROM
(VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS n1(n)
CROSS JOIN
(VALUES(0),(10),(20),(30),(40),(50),(60),(70),(80),(90)) AS n10(n)
CROSS JOIN
(VALUES(0),(100),(200),(300),(400),(500),(600),(700),(800),(900)) AS n100(n)
) AS num
ON
d.min + num.n DAYS <= d.max
ORDER BY
num.n;
If you plan to execute this kind of query more than once, you should consider creating a real table with the values for the loop:
CREATE TABLE dummy_loop AS (
SELECT
n1.n + n10.n + n100.n AS n
FROM
(VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS n1(n)
CROSS JOIN
(VALUES(0),(10),(20),(30),(40),(50),(60),(70),(80),(90)) AS n10(n)
CROSS JOIN
(VALUES(0),(100),(200),(300),(400),(500),(600),(700),(800),(900)) AS n100(n)
) WITH DATA;
ALTER TABLE dummy_loop ADD PRIMARY KEY (n);
Depending on how you want to use it, you could even create a table covering, say, 100 years. That is only 100 * 365 = 36,500 rows with just a date field, so the table will be quite small and fast for joins.
CREATE TABLE dummy_dates AS (
SELECT
DATE('1970-01-01') + (n1.n + n10.n + n100.n) DAYS AS date
FROM
(VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS n1(n)
CROSS JOIN
(VALUES(0),(10),(20),(30),(40),(50),(60),(70),(80),(90)) AS n10(n)
CROSS JOIN
(VALUES(0),(100),(200),(300),(400),(500),(600),(700),(800),(900)) AS n100(n)
) WITH DATA;
ALTER TABLE dummy_dates ADD PRIMARY KEY (date);
And the select query could look like:
SELECT *
FROM dummy_dates
WHERE date BETWEEN :startDate AND :endDate;
EDIT 2: Thanks to @Lennart's suggestion I have changed TABLE(VALUES(..,..,..)) to VALUES(..,..,..), because, as he said, TABLE is a synonym for LATERAL, which was a real surprise for me.
EDIT 3: Thanks to @godric7gt I have removed TIMESTAMPDIFF and will remove it from all my scripts because, as the documentation says:
These assumptions are used when converting the information in the second argument, which is a timestamp duration, to the interval type specified in the first argument. The returned estimate may vary by a number of days. For example, if the number of days (interval 16) is requested for the difference between '1997-03-01-00.00.00' and '1997-02-01-00.00.00', the result is 30. This is because the difference between the timestamps is 1 month, and the assumption of 30 days in a month applies.
This was a real surprise, because I had always trusted this function for day differences.
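If you need an exact day count in DB2, the DAYS function avoids that estimate; a minimal sketch using the dates from the documentation example above:
-- DAYS() returns a day number, so subtracting gives the exact difference (28 here),
-- rather than TIMESTAMPDIFF's 30-days-per-month estimate of 30.
SELECT DAYS(DATE('1997-03-01')) - DAYS(DATE('1997-02-01')) AS exact_days
FROM sysibm.sysdummy1;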
Generating the rows requires recursive SQL.
Usually this looks like this in DB2:
with temp (date) as (
select date('23.02.2016') as date from sysibm.sysdummy1
union all
select date + 1 day from temp
where date < date('02.03.2016')
)
select * from temp
For whatever reason, a CTE (using WITH) is to be avoided here.
A possible workaround would be setting
db2set DB2_COMPATIBILITY_VECTOR=8
which enables the use of Oracle-style recursion with CONNECT BY:
SELECT date('22.02.2016') + level days as dt
FROM sysibm.sysdummy1 CONNECT BY date('22.02.2016') + level days <= date('02.03.2016')
Please note: after setting DB2_COMPATIBILITY_VECTOR an instance restart is necessary.
This solution doesn't use WITH, but it does use WHILE and a temp table...hopefully that meets your needs still?
EDIT -- I built this in SSMS 2014
DECLARE @Start DATE
DECLARE @End DATE
SET @Start = '2016-02-23'
SET @End = '2016-03-02'
CREATE TABLE #Dates ([Date] DATE)
WHILE @Start <= @End
BEGIN
INSERT INTO #Dates
SELECT @Start
SET @Start = DATEADD(Day,1,@Start)
END
SELECT * FROM #Dates
DROP TABLE #Dates
I assume AS400 does not support recursive CTEs, and that's why you want a solution without them. I have no clue whether it supports any of the following constructions, but it might be worth a shot. First we will need a generator: any table with a sufficient number of rows will do. If you don't have a table large enough for the number of days you want, you can create a cartesian product. Example:
select row_number() over ()
from a_table t1
cross join a_table t2
Another way of extending the domain is to create the powerset of a table using group by cube, see below.
Assume that, one way or another, we can create a large enough set of rows. You can then generate the dates like:
select date('23/02/2016') + n days
from (
select row_number() over () as n
from a_table
) as t
where n < 100
order by n
If for some reason you don't want to use an existing table, group by cube will produce a relation with a cardinality equal to the size of the power set of the attributes. Here I use 4 attributes, which will generate 16 rows.
select date('2016-01-01') + row_number() over () days
from sysibm.dual x
group by cube(x.dummy, x.dummy, x.dummy, x.dummy)
If you want to generate, say, 100 rows, you need 7 attributes in the group by cube clause (since 2^7 = 128) and a fetch first 100 rows only clause:
select date('2016-01-01') + row_number() over () days
from sysibm.dual x
group by cube(x.dummy, x.dummy, x.dummy, x.dummy, x.dummy, x.dummy, x.dummy)
order by 1
fetch first 100 rows only

How to query date in oracle?

Assuming I have the following table in oracle:
id|orderdatetime (date type)|foodtype (string type)
1|2013-12-02T00:26:00 | burger
2|2013-12-02T00:20:00 | fries
...
(assume there are many dates and times)
Assume someone happens to have a date in mind (e.g. "2013-12-02T00:25:00"), even though there is no database entry with that specific time in there.
Is there some way to query the database such that I can get the row whose datetime is closest to it without being ahead of the date in mind (ideally, it would be less than or equal to)?
(i.e. in this case, the SQL query would return the row for "fries" and not "burger", because the time for "burger" is past the time the user had in mind, despite the fact that the time for "burger" is closer.)
select x.*
from (select id, orderdatetime, foodtype
      from orders
      where orderdatetime <= YOURTIME
      order by orderdatetime desc) x
where rownum = 1
Another would be:
select *
from orders
where orderdatetime = (select max(orderdatetime)
                       from orders
                       where orderdatetime <= YOURTIME)
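As a usage sketch with a concrete timestamp in place of YOURTIME (this assumes the orders table and foodtype column from the question):
select x.*
from (select id, orderdatetime, foodtype
      from orders
      where orderdatetime <= TO_DATE('2013-12-02 00:25:00', 'YYYY-MM-DD HH24:MI:SS')
      order by orderdatetime desc) x
where rownum = 1;
-- Returns the "fries" row (00:20:00); "burger" (00:26:00) is later than the given time, so it is excluded.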

Calculate Average after populating a temp table

I have been tasked with figuring out the average length of time that our customers stick with us. (Specifically from the date they become a customer, to when they placed their last order.)
I am not 100% sure that I am doing this properly, but my thought was to gather the date we entered the customer into the database, then head over to the order table and grab their most recent order date, dump both into a temp table, figure out the length of time between those two dates, and then tally an average based on that number.
(I have to do some other wibbly wobbly time stuff as well, but this is the one that's kicking my butt.)
The end goal with this is to be able to say "On Average our customers stick with us for 4 years, and 3 months." (Or whatever the data shows it to be.)
SELECT * INTO #AvgTable
FROM(
SELECT DISTINCT (c.CustNumber) AS [CustomerNumber]
, COALESCE(convert( VARCHAR(10),c.OrgEnrollDate,101),'') AS [StartDate]
, COALESCE(CONVERT(VARCHAR(10),MAX(co.OrderDate),101),'')AS [EndDate]
,DATEDIFF(DD,c.OrgEnrollDate, co.OrderDate) as [LengthOfTime]
FROM dbo.Customer c
JOIN dbo.CustomerOrder co ON c.ID = co.CustomerID
WHERE c.Archived = 0
AND co.Archived =0
AND c.OrgEnrollDate IS NOT NULL
AND co.OrderDate IS NOT NULL
GROUP BY c.CustNumber
, co.OrderDate 2
)
--This is where I start falling apart
Select AVG[LengthofTime]
From #AvgTable
If I understand you correctly, then just try
SELECT AVG(DATEDIFF(dd, StartDate, EndDate)) AvgTime
FROM #AvgTable
My guess is that, since you are storing the data in a temp table, the integer result of the DATEDIFF is being implicitly converted back to a datetime (which you cannot take an average of).
Don't store the average in your temp table (don't even have a temp table, but that is a whole different conversation). Just do the differencing in your select.
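For instance, here is a hedged sketch of doing the whole calculation in one statement, without the temp table; it assumes the Customer and CustomerOrder columns from the question:
SELECT AVG(DATEDIFF(DAY, c.OrgEnrollDate, lastOrder.LastOrderDate)) AS AvgDaysAsCustomer
FROM dbo.Customer c
JOIN (SELECT CustomerID, MAX(OrderDate) AS LastOrderDate -- most recent order per customer
      FROM dbo.CustomerOrder
      WHERE Archived = 0
      GROUP BY CustomerID) lastOrder ON lastOrder.CustomerID = c.ID
WHERE c.Archived = 0
  AND c.OrgEnrollDate IS NOT NULL;
-- AVG over an int yields a whole number of days; cast to decimal first if you want fractional days.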

Find closest date in SQL Server

I have a table dbo.X with DateTime column Y which may have hundreds of records.
My Stored Procedure has a parameter @CurrentDate. I want to find the date in column Y of the above table dbo.X which is less than and closest to @CurrentDate.
How to find it?
The WHERE clause matches all rows with a date less than @CurrentDate and, since they are ordered in descending order, TOP 1 will be the date closest to @CurrentDate.
SELECT TOP 1 *
FROM x
WHERE x.date < @CurrentDate
ORDER BY x.date DESC
Use DATEDIFF and order your results by how many days or seconds lie between each date and the input. Something like this:
select top 1 rowId, dateCol, datediff(second, @CurrentDate, dateCol) as SecondsBetweenDates
from myTable
where dateCol < @CurrentDate
order by datediff(second, @CurrentDate, dateCol)
I think I have a better solution for this problem.
I will show a few images to support and explain the final solution.
Background
In my solution I have a table of FX Rates. These represent market rates for different currencies. However, our service provider has had a problem with the rate feed, and as such some rates have zero values. I want to fill the missing data with the rate for the same currency that is closest in time to the missing rate. Basically I want to get the RateId for the nearest non-zero rate, which I will then substitute. (This is not shown here in my example.)
1) So to start off lets identify the missing rates information:
Query showing my missing rates i.e. have a rate value of zero
2) Next lets identify rates that are not missing.
Query showing rates that are not missing
3) This query is where the magic happens. I have made an assumption here which can be removed but was added to improve the efficiency/performance of the query. The assumption on line 26 is that I expect to find a substitute transaction on the same day as that of the missing / zero transaction.
The magic happens on line 23: the ROW_NUMBER function adds an auto number starting at 1 for the shortest time difference between the missing and non-missing transactions. The next closest transaction gets a RowNum of 2, and so on.
Please note that in line 25 I must join on the currencies so that I do not mismatch the currency types. That is, I don't want to substitute an AUD currency with CHF values. I want the closest matching currencies.
Combining the two data sets with a row_number to identify nearest transaction
4) Finally, let's get the data where the RowNum is 1
The final query
The full query is as follows:
; with cte_zero_rates as
(
Select *
from fxrates
where (spot_exp = 0 or spot_imp = 0)
),
cte_non_zero_rates as
(
Select *
from fxrates
where (spot_exp > 0 and spot_imp > 0)
)
,cte_Nearest_Transaction as
(
select z.FXRatesID as Zero_FXRatesID
,z.importDate as Zero_importDate
,z.currency as Zero_Currency
,nz.currency as NonZero_Currency
,nz.FXRatesID as NonZero_FXRatesID
,nz.spot_imp
,nz.importDate as NonZero_importDate
,DATEDIFF(ss, z.importDate, nz.importDate) as TimeDifference
,ROW_NUMBER() Over(partition by z.FXRatesID order by abs(DATEDIFF(ss, z.importDate, nz.importDate)) asc) as RowNum
from cte_zero_rates z
left join cte_non_zero_rates nz on nz.currency = z.currency
and cast(nz.importDate as date) = cast(z.importDate as date)
--order by z.currency desc, z.importDate desc
)
select n.Zero_FXRatesID
,n.Zero_Currency
,n.Zero_importDate
,n.NonZero_importDate
,DATEDIFF(s, n.NonZero_importDate,n.Zero_importDate) as Delay_In_Seconds
,n.NonZero_Currency
,n.NonZero_FXRatesID
from cte_Nearest_Transaction n
where n.RowNum = 1
and n.NonZero_FXRatesID is not null
order by n.Zero_Currency, n.NonZero_importDate

How do I calculate a running total in SQL without using a cursor?

I'm leaving out all the cursor setup and the SELECT from the temp table for brevity. Basically, this code computes a running balance for all transactions per transaction.
WHILE @@fetch_status = 0
BEGIN
set @balance = @balance + @amount
insert into #tblArTran values ( --from artran table
@artranid, @trandate, @type,
@checkNumber, @refNumber, @custid,
@amount, @taxAmount, @balance, @postedflag, @modifieddate )
FETCH NEXT FROM artranCursor into
@artranid, @trandate, @type, @checkNumber, @refNumber,
@amount, @taxAmount, @postedFlag, @custid, @modifieddate
END
Inspired by this code from an answer to another question,
SELECT @nvcConcatenated = @nvcConcatenated + C.CompanyName + ', '
FROM tblCompany C
WHERE C.CompanyID IN (1,2,3)
I was wondering if SQL had the ability to sum numbers in the same way it's concatenating strings, if you get my meaning. That is, to create a "running balance" per row, without using a cursor.
Is it possible?
You might want to take a look at the update to local variable solution here: http://geekswithblogs.net/Rhames/archive/2008/10/28/calculating-running-totals-in-sql-server-2005---the-optimal.aspx
DECLARE @SalesTbl TABLE (DayCount smallint, Sales money, RunningTotal money)
DECLARE @RunningTotal money
SET @RunningTotal = 0
INSERT INTO @SalesTbl
SELECT DayCount, Sales, null
FROM Sales
ORDER BY DayCount
UPDATE @SalesTbl
SET @RunningTotal = RunningTotal = @RunningTotal + Sales
FROM @SalesTbl
SELECT * FROM @SalesTbl
It outperforms all other methods, but there are some doubts about guaranteed row order. It seems to work fine when the temp table is indexed, though.
Nested sub-query 9300 ms
Self join 6100 ms
Cursor 400 ms
Update to local variable 140 ms
SQL can create running totals without using cursors, but it's one of the few cases where a cursor is actually more performant than a set-based solution (given the operators currently available in SQL Server). Alternatively, a CLR function can sometimes shine well. Itzik Ben-Gan did an excellent series in SQL Server Magazine on running aggregates. The series concluded last month, but you can get access to all of the articles if you have an online subscription.
Edit: here's his latest article in the series (SQL CLR).
Given that you can access the whole series by purchasing an online monthly pass for one month - less than 6 bucks - it's worth your while if you're interested in looking at the problem from all angles. Itzik is a Microsoft MVP and a very bright TSQL coder.
In Oracle and PostgreSQL 8.4 you can use window functions:
SELECT SUM(value) OVER (ORDER BY id)
FROM mytable
In MySQL, you can use a session variable for the same purpose:
SELECT @sum := @sum + value
FROM (
SELECT @sum := 0
) vars, mytable
ORDER BY
id
In SQL Server, it's a rare example of a task for which a cursor is a preferred solution.
An example of calculating a running total for each record, but only while the OrderDate for the records falls on the same date. Once the OrderDate moves to a different day, a new running total is started and accumulated for the new day (assume the table structure and data below):
select O.OrderId,
convert(char(10),O.OrderDate,101) as 'Order Date',
O.OrderAmt,
(select sum(OrderAmt) from Orders
where OrderID <= O.OrderID and
convert(char(10),OrderDate,101)
= convert(char(10),O.OrderDate,101))
'Running Total'
from Orders O
order by OrderID
Here are the results returned from the query using sample Orders Table:
OrderId Order Date OrderAmt Running Total
----------- ---------- ---------- ---------------
1 10/11/2003 10.50 10.50
2 10/11/2003 11.50 22.00
3 10/11/2003 1.25 23.25
4 10/12/2003 100.57 100.57
5 10/12/2003 19.99 120.56
6 10/13/2003 47.14 47.14
7 10/13/2003 10.08 57.22
8 10/13/2003 7.50 64.72
9 10/13/2003 9.50 74.22
Note that the "Running Total" starts out with a value of 10.50, and then becomes 22.00, and finally becomes 23.25 for OrderID 3, since all these records have the same OrderDate (10/11/2003). But when OrderID 4 is displayed the running total is reset, and the running total starts over again. This is because OrderID 4 has a different date for its OrderDate, then OrderID 1, 2, and 3. Calculating this running total for each unique date is once again accomplished by using a correlated sub query, although an extra WHERE condition is required, which identified that the OrderDate's on different records need to be the same day. This WHERE condition is accomplished by using the CONVERT function to truncate the OrderDate into a MM/DD/YYYY format.
In SQL Server 2012 and up you can just use the Sum windowing function directly against the original table:
SELECT
artranid,
trandate,
type,
checkNumber,
refNumber,
custid,
amount,
taxAmount,
Balance = Sum(amount) OVER (ORDER BY trandate ROWS UNBOUNDED PRECEDING),
postedflag,
modifieddate
FROM
dbo.Sales
;
This will perform very well compared to all solutions and will not have the potential for errors as found in the "quirky update".
Note that you should use the ROWS version when possible; the RANGE version may perform less well.
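For instance, here is a hedged sketch of the per-day running total from the earlier Orders example, rewritten with the windowed SUM and the ROWS form recommended above (it assumes the same Orders table):
SELECT OrderId,
       CONVERT(char(10), OrderDate, 101) AS [Order Date],
       OrderAmt,
       [Running Total] = SUM(OrderAmt) OVER (PARTITION BY CONVERT(char(10), OrderDate, 101) -- restart per day
                                             ORDER BY OrderId
                                             ROWS UNBOUNDED PRECEDING)
FROM Orders
ORDER BY OrderId;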
You can just include a correlated subquery in the select clause (this will perform poorly for very large result sets):
Select <other stuff>,
(Select Sum(ColumnVal) From Table
Where OrderColumn <= T.OrderColumn) As RunningTotal
From Table T
Order By OrderColumn
You can do a running count; here is an example. Keep in mind that this is actually not that fast, since it has to scan the table for every row; if your table is large, this can be quite time-consuming and costly.
create table #Test (id int, Value decimal(16,4))
insert #Test values(1,100)
insert #Test values(2,100)
insert #Test values(3,100)
insert #Test values(4,200)
insert #Test values(5,200)
insert #Test values(6,200)
insert #Test values(7,200)
select *,(select sum(Value) from #Test t2 where t2.id <=t1.id) as SumValues
from #test t1
id Value SumValues
1 100.0000 100.0000
2 100.0000 200.0000
3 100.0000 300.0000
4 200.0000 500.0000
5 200.0000 700.0000
6 200.0000 900.0000
7 200.0000 1100.0000
On SQLTeam there's also an article about calculating running totals. There is a comparison of 3 ways to do it, along with some performance measuring:
using cursors
using a subselect (as per SQLMenace's post)
using a CROSS JOIN
Cursors outperform the other solutions by far, but if you must not use cursors, there is at least an alternative (a sketch of the CROSS JOIN variant is shown below).
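As a sketch (not taken from the article) of the CROSS JOIN variant, reusing the #Test table from the example above:
SELECT t1.id, t1.Value, SUM(t2.Value) AS SumValues
FROM #Test t1
CROSS JOIN #Test t2
WHERE t2.id <= t1.id -- every earlier-or-equal row contributes to this row's total
GROUP BY t1.id, t1.Value
ORDER BY t1.id;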
That SELECT @nvcConcatenated bit is only returning a single concatenated value. (Although it computes the intermediate values on a per-row basis, you are only able to retrieve the final value.)
So, I think the answer is no. If you wanted a single final sum value you would of course just use SUM.
I'm not saying you can't do it, I'm just saying you can't do it using this 'trick'.
Note that using a variable to accomplish this, such as in the following, may fail on a multiprocessor system, because separate rows could get calculated on different processors and may end up using the same starting value. My understanding is that a query hint can be used to force it onto a single thread, but I do not have that information handy.
UPDATE @SalesTbl
SET @RunningTotal = RunningTotal = @RunningTotal + Sales
FROM @SalesTbl
Using one of the other options (a cursor, a window function, or nested queries) is typically going to be your safest bet for reliable results.
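If you do use the variable approach, here is a hedged sketch of what such a hint might look like; OPTION (MAXDOP 1) is assumed to be the hint meant, and even then the row order is not formally guaranteed:
-- Reuses the @SalesTbl / @RunningTotal declarations from the snippet above.
UPDATE @SalesTbl
SET @RunningTotal = RunningTotal = @RunningTotal + Sales
FROM @SalesTbl
OPTION (MAXDOP 1); -- limit the statement to a single scheduler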
select TransactionDate,
       amount,
       amount + isnull((select sum(x.amount) -- isnull so the first row (with no earlier rows) is not NULL
                        from Transactions x
                        where x.TransactionDate < t.TransactionDate), 0) as RunningTotal
from Transactions t
The condition x.TransactionDate < t.TransactionDate could be any condition that represents all the previous records aside from the current one.