I have the following MS SQL table:
+-------------------------+---+---+---+---+---+
| date | A | B | C | D | E |
+-------------------------+---+---+---+---+---+
| 2017-02-02 00:00:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:01:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:02:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:03:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:04:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:05:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:06:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:07:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:08:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:09:00.000 | 1 | 0 | 0 | 1 | 0 |
+-------------------------+---+---+---+---+---+
I need to write a query that changes the 0s to 1s in column D whenever D drops to zero for less than 5 minutes. In other words, I need to "bridge" the two 1s at either end of a run of 0s when that run is shorter than five minutes.
Is it possible to perform this operation using T-SQL (SQL Server 2014)?
Thank you.
Example 1:
+-------------------------+---+---+---+---+---+
| date | A | B | C | D | E |
+-------------------------+---+---+---+---+---+
| 2017-02-02 00:00:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:01:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:02:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:03:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:04:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:05:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:06:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:07:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:08:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:09:00.000 | 1 | 0 | 0 | 1 | 0 |
+-------------------------+---+---+---+---+---+
The query should return
+-------------------------+---+---+---+---+---+
| date | A | B | C | D | E |
+-------------------------+---+---+---+---+---+
| 2017-02-02 00:00:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:01:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:02:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:03:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:04:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:05:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:06:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:07:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:08:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:09:00.000 | 1 | 0 | 0 | 1 | 0 |
+-------------------------+---+---+---+---+---+
Example 2:
+-------------------------+---+---+---+---+---+
| date | A | B | C | D | E |
+-------------------------+---+---+---+---+---+
| 2017-02-02 00:00:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:01:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:02:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:03:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:04:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:05:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:06:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:07:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:08:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:09:00.000 | 1 | 0 | 0 | 1 | 0 |
+-------------------------+---+---+---+---+---+
The query should return
+-------------------------+---+---+---+---+---+
| date | A | B | C | D | E |
+-------------------------+---+---+---+---+---+
| 2017-02-02 00:00:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:01:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:02:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:03:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:04:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:05:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:06:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:07:00.000 | 1 | 0 | 0 | 0 | 0 |
| 2017-02-02 00:08:00.000 | 1 | 0 | 0 | 1 | 0 |
| 2017-02-02 00:09:00.000 | 1 | 0 | 0 | 1 | 0 |
+-------------------------+---+---+---+---+---+
UPDATE - You probably got the idea from the original, but I used the wrong aggregate function some of the time; I think I have it untangled now.
So... If a row's value is 0, but the run of 0s between the most recent preceding row with a 1 and the earliest subsequent row with a 1 spans less than 5 minutes, you want to change that row's value to a 1. And in all other cases you leave the value as is. Right?
The time of the most recent row with a 1 can be expressed as max(case when D = 1 then date end) over (order by date rows unbounded preceding).
Likewise, the time of the earliest subsequent row with a 1 can be expressed as min(case when D = 1 then date end) over (order by date rows between current row and unbounded following).
Find the interval between them; if the dates are all aligned to an even minute, then you can simply use datediff:
datediff(minute,
         max(case when D = 1 then date end) over (order by date rows unbounded preceding),
         min(case when D = 1 then date end) over (order by date rows between current row and unbounded following))
Then apply the case logic:
case when -- the above expression
          < 6 then 1 else D end
(With one-minute samples, a run of 0s shorter than five minutes means the two bracketing 1s are less than six minutes apart, hence < 6.)
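Putting it all together, a sketch (assuming the table is named t, with one row per minute and dates aligned to whole minutes):
select [date], A, B, C,
       case when D = 0
             and datediff(minute,
                          max(case when D = 1 then [date] end)
                              over (order by [date] rows unbounded preceding),
                          min(case when D = 1 then [date] end)
                              over (order by [date] rows between current row and unbounded following)
                         ) < 6  -- bracketing 1s under 6 minutes apart = run of 0s shorter than 5 minutes
            then 1
            else D
       end as D,
       E
from t;
If a run of 0s starts at the very beginning of the data or continues to the end, one of the two window expressions is NULL, the comparison is not satisfied, and D is left unchanged.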
I have the below dataframe:
Item_code Type year-month Qty
0 TH-32H400M O Jan-22-Q 0.000000
1 TH-32H400M MPO Jan-22-Q 0.000000
2 TH-32H400M ADJ Jan-22-Q 0.000000
3 TH-32H400M BP_O Jan-22-Q 0.000000
4 TH-32H400M LY_O Jan-22-Q 0.000000
... ... ... ... ...
1795 TH-75JX660M P Jun-23-Q 0.000000
1796 TH-75JX660M S Jun-23-Q 11.538462
1797 TH-75JX660M BP_S Jun-23-Q 0.000000
1798 TH-75JX660M LY_S Jun-23-Q 0.000000
1799 TH-75JX660M I Jun-23-Q 0.769231
When I run the below code I get the desired result, but with a few issues:
new_df = new_df.pivot(index=['Item_code','year-month'], columns='Type', values='Qty')
+--------------+------------+----------+------+------+---+-------------+------+-----+-----+-----+-----+
| Item_code | year-month | ADJ | BP_O | BP_S | I | LY_O | LY_S | MPO | O | P | S |
+--------------+------------+----------+------+------+---+-------------+------+-----+-----+-----+-----+
| TH-32GS655M | Apr-22-Q | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| | Apr-23-Q | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 350 | 350 | 350 |
| | Aug-22-Q | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| | Dec-22-Q | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 350 | 0 | 0 |
| | Feb-22-Q | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| | Feb-23-Q | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 350 | 350 | 350 |
| | Jan-22-Q | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| ...          | ...        | ...      | ...  | ...  | ...| ...         | ...  | ... | ... | ... | ... |
| TH-75HX750 | Jan-23-Q | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 350 | 350 | 350 |
| | Jul-22-Q | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| | Jun-22-Q | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| | Jun-23-Q | 0 | 0 | 0 | 13| 0 | 0 | 0 | 0 | 0 | 1.9 |
+--------------+------------+----------+------+------+---+-------------+------+-----+-----+-----+-----+
Why is "Item code" only not repeated on every row
How to get column name on the same row,
Basically "Type" should not be there and "Item_code" & "year-month" should be first row witht he rest of column names
Thank you for the help.
Maybe this solution will work.
new_df = new_df.pivot(index=['Item_code','year-month'], columns='Type', values='Qty')
new_df = new_df.reset_index().fillna(0)
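reset_index() turns the Item_code and year-month index levels back into ordinary columns, so they repeat on every row. The leftover "Type" label is the name pandas keeps on the columns axis after the pivot; a minimal sketch of clearing it as well (rename_axis with columns=None removes that name):
new_df = new_df.pivot(index=['Item_code', 'year-month'], columns='Type', values='Qty')
new_df = new_df.reset_index().fillna(0)
new_df = new_df.rename_axis(columns=None)  # drop the residual "Type" columns-axis name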
I have a table in MS SQL that tracks the status of each ID in a marketing campaign. For each month there is a column indicating whether each consumer ID is in the programme (is_in_programme) and, if so, whether they are a newcomer that month (is_new_apply). Each ID can apply to the programme multiple times.
My table contains a datetime (reported on the last day of every month, with no skipped months), the ID, and the status flags described above. I want to check, for each period, how many times each ID has entered the programme (the EXPECTED column).
For my Output column, I tried ROW_NUMBER() partitioned by id, is_in_programme, is_new_apply when is_in_programme and is_new_apply are both 1, but I cannot carry the occurrence count forward for the rows where is_new_apply = 0.
+------------+-------+-----------------+--------------+--------+----------+
| datetime | ID | is_in_programme | is_new_apply | Output | EXPECTED |
+------------+-------+-----------------+--------------+--------+----------+
| 31/01/2020 | 12345 | 1 | 1 | 1 | 1 |
| 29/02/2020 | 12345 | 1 | 0 | 0 | 1 |
| 31/03/2020 | 12345 | 1 | 0 | 0 | 1 |
| 30/04/2020 | 12345 | 1 | 0 | 0 | 1 |
| 31/05/2020 | 12345 | 0 | 0 | 0 | 0 |
| 30/06/2020 | 12345 | 1 | 1 | 2 | 2 |
| 31/07/2020 | 12345 | 1 | 0 | 0 | 2 |
| 31/08/2020 | 12345 | 1 | 0 | 0 | 2 |
| 31/01/2020 | 67890 | 0 | 0 | 0 | 0 |
| 29/02/2020 | 67890 | 1 | 1 | 1 | 1 |
| 31/03/2020 | 67890 | 1 | 0 | 0 | 1 |
| 30/04/2020 | 67890 | 0 | 0 | 0 | 0 |
| 31/05/2020 | 67890 | 0 | 0 | 0 | 0 |
| 30/06/2020 | 67890 | 1 | 1 | 2 | 2 |
| 31/07/2020 | 67890 | 1 | 0 | 0 | 2 |
| 31/08/2020 | 67890 | 1 | 0 | 0 | 2 |
| 30/09/2020 | 67890 | 0 | 0 | 0 | 0 |
| 31/10/2020 | 67890 | 1 | 1 | 3 | 3 |
| 30/11/2020 | 67890 | 1 | 0 | 0 | 3 |
| 31/12/2020 | 67890 | 1 | 0 | 0 | 3 |
+------------+-------+-----------------+--------------+--------+----------+
Is there any way to compute, for each period, how many times each ID has been in the marketing campaign, as in my EXPECTED column?
You seem to want a cumulative sum of is_new_apply when is_in_programme is not 0. That would be:
-- the running SUM() OVER (ORDER BY ...) requires SQL Server 2012 or later
select t.*,
       (case when is_in_programme <> 0
             then sum(is_new_apply) over (partition by id order by datetime)
             else 0
        end) as expected
from t;
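Tracing ID 12345 through the sample confirms it: the running sum of is_new_apply is 1 from 31/01/2020 through 30/04/2020 and becomes 2 at 30/06/2020, while the case expression zeroes out 31/05/2020 where is_in_programme is 0, reproducing the EXPECTED column.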
I have this table:
id | date | player_id | score | all_games | all_wins | n_games | n_wins
============================================================================================
6747 | 2018-08-10 | 1 | 0 | 1 | | 1 |
6751 | 2018-08-10 | 1 | 0 | 2 | 0 | 2 |
6764 | 2018-08-10 | 1 | 0 | 3 | 0 | 3 |
6783 | 2018-08-10 | 1 | 0 | 4 | 0 | 4 |
6804 | 2018-08-10 | 1 | 0 | 5 | 0 | 5 |
6821 | 2018-08-10 | 1 | 0 | 6 | 0 | 6 |
6828 | 2018-08-10 | 1 | 0 | 7 | 0 | 7 |
17334 | 2018-08-23 | 1 | 0 | 8 | 0 | 8 | 0
17363 | 2018-08-23 | 1 | 0 | 9 | 0 | 9 | 0
17398 | 2018-08-23 | 1 | 0 | 10 | 0 | 10 | 0
17403 | 2018-08-23 | 1 | 0 | 11 | 0 | 11 | 0
17409 | 2018-08-23 | 1 | 0 | 12 | 0 | 12 | 0
33656 | 2018-09-13 | 1 | 0 | 13 | 0 | 13 | 0
33687 | 2018-09-13 | 1 | 0 | 14 | 0 | 14 | 0
45393 | 2018-09-27 | 1 | 0 | 15 | 0 | 15 | 0
45402 | 2018-09-27 | 1 | 0 | 16 | 0 | 16 | 0
45422 | 2018-09-27 | 1 | 1 | 17 | 0 | 17 | 0
45453 | 2018-09-27 | 1 | 0 | 18 | 1 | 18 | 0
45461 | 2018-09-27 | 1 | 0 | 19 | 1 | 19 | 0
45474 | 2018-09-27 | 1 | 0 | 20 | 1 | 20 | 0
57155 | 2018-10-11 | 1 | 0 | 21 | 1 | 21 | 1
57215 | 2018-10-11 | 1 | 0 | 22 | 1 | 22 | 1
57225 | 2018-10-11 | 1 | 0 | 23 | 1 | 23 | 1
69868 | 2018-10-25 | 1 | 0 | 24 | 1 | 24 | 1
The issue I now need to solve is that n_games should be a rolling count of games over the last few days (a user can play multiple games per day); at present it is just the same as row_number() OVER all_games.
The other issue is that n_wins only sums the rolling window's wins up to the previous day, so if a user wins a couple of games early in a day, those wins are not added to n_wins until the next day.
I have tried this query:
SELECT id,
       date,
       player_id,
       score,
       row_number() OVER all_races AS all_games,
       sum(score) OVER all_races AS all_wins,
       row_number() OVER last_n AS n_games,
       sum(score) OVER last_n AS n_wins
FROM scores
WINDOW
    all_races AS (PARTITION BY player_id ORDER BY id ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING),
    last_n AS (PARTITION BY player_id ORDER BY date ASC RANGE BETWEEN interval '7 days' PRECEDING AND interval '1 day' PRECEDING);
Ideally I need a query that will output something like this table:
id | date | player_id | score | all_games | all_wins | n_games | n_wins
============================================================================================
6747 | 2018-08-10 | 1 | 0 | 1 | | 1 |
6751 | 2018-08-10 | 1 | 0 | 2 | 0 | 2 |
6764 | 2018-08-10 | 1 | 0 | 3 | 0 | 3 |
6783 | 2018-08-10 | 1 | 0 | 4 | 0 | 4 |
6804 | 2018-08-10 | 1 | 0 | 5 | 0 | 5 |
6821 | 2018-08-10 | 1 | 0 | 6 | 0 | 6 |
6828 | 2018-08-10 | 1 | 0 | 7 | 0 | 7 |
17334 | 2018-08-23 | 1 | 0 | 8 | 0 | 1 | 0
17363 | 2018-08-23 | 1 | 0 | 9 | 0 | 2 | 0
17398 | 2018-08-23 | 1 | 0 | 10 | 0 | 3 | 0
17403 | 2018-08-23 | 1 | 0 | 11 | 0 | 4 | 0
17409 | 2018-08-23 | 1 | 0 | 12 | 0 | 5 | 0
33656 | 2018-09-13 | 1 | 1 | 13 | 1 | 6 | 0
33687 | 2018-09-13 | 1 | 0 | 14 | 1 | 7 | 1
45393 | 2018-09-27 | 1 | 0 | 15 | 1 | 1 | 1
45402 | 2018-09-27 | 1 | 0 | 16 | 1 | 2 | 1
45422 | 2018-09-27 | 1 | 1 | 17 | 1 | 3 | 1
45453 | 2018-09-27 | 1 | 0 | 18 | 2 | 4 | 2
45461 | 2018-09-27 | 1 | 0 | 19 | 2 | 5 | 2
45474 | 2018-09-27 | 1 | 0 | 20 | 2 | 6 | 1
57155 | 2018-10-11 | 1 | 0 | 21 | 2 | 7 | 1
57215 | 2018-10-11 | 1 | 0 | 22 | 2 | 1 | 1
57225 | 2018-10-11 | 1 | 0 | 23 | 2 | 2 | 1
69868 | 2018-10-25 | 1 | 0 | 24 | 2 | 3 | 1
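One note on the attempted query: row_number() is a ranking function and ignores the window frame, so the RANGE bounds in last_n have no effect on n_games; a frame-aware count needs count(*) instead. A minimal sketch of that substitution, keeping the question's frame bounds (they may still need adjusting, e.g. ending the frame at CURRENT ROW, to reproduce the expected output exactly; RANGE with an interval offset requires PostgreSQL 11+):
SELECT id,
       date,
       player_id,
       score,
       row_number() OVER all_races AS all_games,
       sum(score) OVER all_races AS all_wins,
       count(*) OVER last_n AS n_games,   -- count(*) honours the frame; row_number() does not
       sum(score) OVER last_n AS n_wins
FROM scores
WINDOW
    all_races AS (PARTITION BY player_id ORDER BY id ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING),
    last_n AS (PARTITION BY player_id ORDER BY date ASC RANGE BETWEEN interval '7 days' PRECEDING AND interval '1 day' PRECEDING);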
This dataset contains one ordered timestamp column (A) along with a pair of marker columns (B + C) that represent the start and end of a 'block'; what I'm looking to produce is (D).
I've had a hard time explaining this problem to colleagues, but essentially I need a way of giving an ID to these blocks of varying row count; note that, as row 8 shows, a block can sometimes occupy only one row.
| A | B | C | D |
-----------------------------------------
| 06/10/2018 13:17:40 | 1 | 0 | 1 |
| 06/10/2018 13:17:56 | 0 | 0 | 1 |
| 06/10/2018 13:18:08 | 0 | 1 | 1 |
| 06/10/2018 13:18:21 | 1 | 0 | 2 |
| 06/10/2018 13:18:26 | 0 | 0 | 2 |
| 06/10/2018 13:18:26 | 0 | 0 | 2 |
| 06/10/2018 13:18:28 | 0 | 1 | 2 |
| 06/10/2018 13:18:28 | 1 | 1 | 3 |
| 06/10/2018 13:18:31 | 1 | 0 | 4 |
| 06/10/2018 19:49:26 | 0 | 0 | 4 |
| 06/10/2018 19:50:24 | 0 | 1 | 4 |
You can use the LAG window function in a subquery, then a SUM window function over a conditional expression:
SELECT A, B, C,
       SUM(CASE WHEN preC = 1 THEN 1 ELSE 0 END) OVER (ORDER BY A, preC) + 1 AS D
FROM (
    SELECT *,
           LAG(C, 1, C) OVER (ORDER BY A) AS preC
    FROM T
) t1
Result
| A | B | C | D |
-----------------------------------------
| 06/10/2018 13:17:40 | 1 | 0 | 1 |
| 06/10/2018 13:17:56 | 0 | 0 | 1 |
| 06/10/2018 13:18:08 | 0 | 1 | 1 |
| 06/10/2018 13:18:21 | 1 | 0 | 2 |
| 06/10/2018 13:18:26 | 0 | 0 | 2 |
| 06/10/2018 13:18:26 | 0 | 0 | 2 |
| 06/10/2018 13:18:28 | 0 | 1 | 2 |
| 06/10/2018 13:18:28 | 1 | 1 | 3 |
| 06/10/2018 13:18:31 | 1 | 0 | 4 |
| 06/10/2018 19:49:26 | 0 | 0 | 4 |
| 06/10/2018 19:50:24 | 0 | 1 | 4 |
I don't see what C has to do with the problem. This is just a cumulative sum on B:
select a, b, c,
sum(b) over (order by a) as d
from t;
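This works because each block opens with B = 1, so the running total increments exactly when a new block starts: over the sample, the running sum of B is 1, 1, 1, 2, 2, 2, 2, 3, 4, 4, 4, which matches D, including the one-row block at row 8.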
Does anyone know the Firebird equivalent of the MySQL aggregate functions FIRST and LAST? I have this inventory master table:
DATE |ITEM_CODE | BEG | + | - | - | - | + | + | + | + | - | - | END
2015-10-27 | 000000000MS016 |12.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12.5
2015-10-27 | 000000000PN044 | 0 |10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10
2015-10-27 | 000000000VI064 | 440 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 445
2015-10-27 | 000000000VI029 | 274 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 269
2015-10-28 | 000000000MS016 |12.5 |20 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 32.5
2015-10-28 | 000000000PN044 | 10 |50 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 60
2015-10-28 | 000000000VI064 | 445 | 0 | 0 |10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 435
2015-10-28 | 000000000VI029 | 269 | 0 | 0 | 0 |20 | 0 | 0 | 0 | 0 | 0 | 0 | 249
2015-10-29 | 000000000MS016 |32.5 | 0 |10 | 0 | 0 | 0 | 0 | 0 |30 | 0 | 5 | 47.5
2015-10-29 | 000000000PN044 | 60 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 65
2015-10-29 | 000000000VI064 | 435 | 0 | 0 | 0 | 0 |10 | 0 | 0 | 0 | 8 | 0 | 437
2015-10-29 | 000000000VI029 | 249 |35 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 280
2015-10-30 | 000000000MS016 |47.5 | 0 |15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 32.5
2015-10-30 | 000000000PN044 | 65 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 65
2015-10-30 | 000000000VI064 | 437 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 437
2015-10-30 | 000000000VI029 | 280 | 0 | 5 | 0 | 5 | 0 | 0 | 6 | 0 | 3 | 0 | 273
and I have this SELECT statement:
SELECT
INV.ITEM_CODE,
FIRST(INV.BEG_QTY) AS BEG_QTY,
SUM(INV.REC_QTY) AS REC_QTY,
SUM(INV.RET_QTY) AS RET_QTY,
SUM(INV.SOLD_QTY) AS SOLD_QTY,
SUM(INV.BO_QTY) AS BO_QTY,
SUM(INV.ADJ_QTY) AS ADJ_QTY,
SUM(INV.COUNT_P) AS COUNT_P,
SUM(INV.COUNT_C) AS COUNT_C,
SUM(INV.TRANS_IN) AS TRANS_IN,
SUM(INV.TRANS_OUT) AS TRANS_OUT,
SUM(INV.DELIVERY) AS DELIVERY,
LAST(INV.END_QTY) AS END_QTY
FROM INV_MASTER INV
WHERE (INV.INV_DATE BETWEEN '2015-10-27' AND '2015-10-31')
GROUP BY INV.ITEM_CODE
ORDER BY INV.ITEM_CODE
and the result SHOULD look like this:
ITEM_CODE | BEG | + | - | - | - | + | + | + | + | - | - | END
000000000MS016 |12.5 |20 |25 | 0 | 0 | 0 | 0 | 0 |30 | 0 | 5 | 32.5
000000000PN044 | 0 |70 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 65
000000000VI064 | 440 | 5 | 0 |10 | 0 |10 | 0 | 0 | 0 | 8 | 0 | 437
000000000VI029 | 274 |35 |10 | 0 |25 | 0 | 0 | 6 | 0 | 3 | 4 | 273
but I'm having a problem with the FIRST and LAST aggregate functions; I'm using Firebird v2.5. How can I do this?
You should be able to replace the use of LAST with:
(SELECT END_QTY
 FROM INV_MASTER
 WHERE ITEM_CODE = INV.ITEM_CODE
   AND INV_DATE = MAX(INV.INV_DATE)) AS END_QTY
This selects the END_QTY of the current item, with the highest date for that item.
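By the same pattern, FIRST(INV.BEG_QTY) could be replaced with the beginning quantity at the earliest date in the group. A sketch along the same lines (untested on Firebird 2.5, assuming the same correlated-subquery form is accepted with MIN):
(SELECT BEG_QTY
 FROM INV_MASTER
 WHERE ITEM_CODE = INV.ITEM_CODE
   AND INV_DATE = MIN(INV.INV_DATE)) AS BEG_QTY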