SQL statement to update a column

I have a table T1 with the following values.
I need a result table with an additional column that is the running average of sum_sales up to each date, i.e.:
x1= 1000.45
x2= (1000.45+2000.00)/2
x3= (1000.45+2000.00+3000.50)/3
x4= (1000.45+2000.00+3000.50+4000.24)/4
The result table should look like the following.
I need to write a SQL statement for an Oracle database that adds a column to the result table with the values x1, x2, x3, x4.
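The tables were posted as images and are not reproduced here; a minimal hypothetical reconstruction of T1 (only the sum_sales figures come from the calculation above, the dates and division are invented) would be:
CREATE TABLE t1 (
    date_d    DATE,
    division  VARCHAR2(10),
    sum_sales NUMBER
);
INSERT INTO t1 VALUES (DATE '2013-01-01', 'D1', 1000.45);
INSERT INTO t1 VALUES (DATE '2013-01-02', 'D1', 2000.00);
INSERT INTO t1 VALUES (DATE '2013-01-03', 'D1', 3000.50);
INSERT INTO t1 VALUES (DATE '2013-01-04', 'D1', 4000.24);
-- Expected added column: x1 = 1000.45, x2 = 1500.225,
-- x3 = 2000.3167 (rounded), x4 = 2500.2975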

You need to use an analytic function for this. My untested SQL is as follows:
SELECT date,
       division,
       sum_sales,
       AVG(sum_sales) OVER (ORDER BY date ROWS UNBOUNDED PRECEDING) AS running_avg
FROM   table;
date is a reserved word in Oracle, so if you are using that as your real column name you will need to include it in quotes.
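For example, assuming the column really is named DATE (stored uppercase, as unquoted identifiers are in Oracle) and with t1 standing in for the real table name, the quoted version would be:
SELECT "DATE",
       division,
       sum_sales,
       AVG(sum_sales) OVER (ORDER BY "DATE" ROWS UNBOUNDED PRECEDING) AS running_avg
FROM   t1;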

select date, division, sum_sales,
       avg(sum_sales) over (order by date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
from table
group by date, division, sum_sales

You need to use the AVG function with an OVER clause ordering by date. As each row is an aggregation of all the preceding rows, you need to define the aggregation window as UNBOUNDED PRECEDING.
Following these guidelines, the resulting statement would be:
SELECT date_d,
       division,
       sum_sales,
       AVG(sum_sales) OVER (ORDER BY date_d ROWS UNBOUNDED PRECEDING) avrg
FROM   supplier;
You can test that in FIDDLE
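The fiddle is not reproduced here, but run against the hypothetical data sketched in the question (loaded into a table named supplier), the statement should return something like this (hand-computed, worth verifying):
-- DATE_D      DIVISION  SUM_SALES  AVRG
-- 2013-01-01  D1        1000.45    1000.45
-- 2013-01-02  D1        2000       1500.225
-- 2013-01-03  D1        3000.5     2000.3167 (approx.)
-- 2013-01-04  D1        4000.24    2500.2975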
These two articles are a good introduction to analytic functions:
Introduction to Analytic Functions (Part 1)
Introduction to Analytic Functions (Part 2)

Related

How to get the preceding values in Redshift based on Where condition?

I have three columns: student_name, column_1, and column_2. I want to print the preceding value wherever a 0 exists in column_2.
I want output like the one below. I used the LAG function, but I am probably using it the wrong way.
From what I can tell, you want to count the number of 0 values up to and including each row. If this interpretation is correct, you would use a conditional cumulative sum:
select t.*,
       sum(case when column_2 = 0 then 1 else 0 end) over (
           partition by student_name
           order by <ordering column>
           rows between unbounded preceding and current row
       ) as zeros_so_far
from t;
Note: This assumes that you have an ordering column which you have not included in the question.
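As a hand-worked illustration (all values invented, with month_n standing in for the missing ordering column), the conditional cumulative sum behaves like this:
-- student_name  month_n  column_2  zeros_so_far
-- alice         1        5         0
-- alice         2        0         1
-- alice         3        7         1
-- alice         4        0         2
-- bob           1        0         1   (PARTITION BY restarts the count per student)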

Count() over() has repeated records

I often use sum() over() to calculate cumulative values, but today I tried count() over() and the result was not what I expected. Can someone explain why the result has repeated records on the same day?
I know the regular way is count(distinct id) grouped by date, followed by sum() over(order by date); I am just curious about the result of count(id) over(order by date):
select pre.date, count(person_id) over (order by pre.date)
from (select distinct person_id, date from events) pre
The result will be repeated records for the same day.
Because your outer query has not filtered or aggregated the results from the inner query. It returns the same number of rows.
You want aggregation:
select pre.date, count(*) as cnt_on_date,
       sum(count(*)) over (order by pre.date) as running_count
from (select distinct person_id, date from events) pre
group by pre.date;
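A hand-worked sketch of the difference, on invented data: suppose the inner query reduces events to these distinct (person_id, date) pairs.
-- person_id  date        count(person_id) over (order by date)
-- 1          2020-01-01  2   (both rows on a day share the same value)
-- 2          2020-01-01  2
-- 1          2020-01-02  4
-- 3          2020-01-02  4
-- The aggregated version instead returns one row per date:
-- date        cnt_on_date  running_count
-- 2020-01-01  2            2
-- 2020-01-02  2            4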
Almost all analytic functions (row_number() is the one exception that comes to mind) do not differentiate between ties on the ORDER BY columns. Some documentation states this directly:
Oracle
If you specify a logical window with the RANGE keyword, then the function returns the same result for each of the rows
Postgresql
By default, if ORDER BY is supplied then the frame consists of all rows from the start of the partition up through the current row, plus any following rows that are equal to the current row according to the ORDER BY clause.
MySQL
With 'ORDER BY': The default frame includes rows from the partition start through the current row, including all peers of the current row (rows equal to the current row according to the ORDER BY clause).
But in general, adding ORDER BY to an analytic clause implicitly sets the window specification to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. Since the window calculation is made for each row over its defined window, with the default RANGE frame all rows sharing the same value of the ORDER BY columns fall into the same window and therefore produce the same result. So to get a true running total, you need either ROWS BETWEEN or a more detailed column in the ORDER BY part of the analytic clause. Functions that do not support a windowing clause are exceptions to this rule, but this is sometimes not documented directly, so I will not try to list them here. Functions that can also be used as aggregates are generally not exceptions and produce the same value.
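A quick sketch of the difference on tied values (table and data invented):
-- Hypothetical table sales(d, amount) with a tie on the first day:
--   d           amount
--   2020-01-01  10
--   2020-01-01  20
--   2020-01-02  30
SELECT d, amount,
       SUM(amount) OVER (ORDER BY d) AS range_sum,  -- default RANGE frame: 30, 30, 60 (ties share a result)
       SUM(amount) OVER (ORDER BY d
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS rows_sum
                                                    -- ROWS frame: 10, 30, 60 (order within the tie is arbitrary)
FROM sales;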

Best way to get 1st record per partition: FIRST_VALUE vs ROW_NUMBER

I am looking for the fastest way to get the 1st record (columns a, b, c) for every partition (a, b) using SQL. The table is ~10,000,000 rows.
Approach #1:
SELECT * FROM (
    SELECT a, b, c,
           ROW_NUMBER() OVER (PARTITION BY a, b ORDER BY date DESC) AS row_num
    FROM T
) x WHERE row_num = 1
But it probably does extra work behind the scenes; I need only the 1st row per partition.
Approach #2 uses FIRST_VALUE(). Since FIRST_VALUE() returns an expression, let's pack/concatenate a, b, c into a single expression using some separator, e.g.:
SELECT FIRST_VALUE(a || ',' || b || ',' || c)
       OVER (PARTITION BY a, b ORDER BY date DESC ROWS UNBOUNDED PRECEDING)
FROM T
But in this case I need to unpack the result, which is an extra step.
Approach #3 uses FIRST_VALUE() and repeats OVER (...) for a and b:
SELECT FIRST_VALUE(a)
           OVER (PARTITION BY a, b ORDER BY date DESC ROWS UNBOUNDED PRECEDING),
       FIRST_VALUE(b)
           OVER (PARTITION BY a, b ORDER BY date DESC ROWS UNBOUNDED PRECEDING),
       c
FROM T
In approach #3 I do not know if the database engine (Redshift) is smart enough to compute the partition only once.
The first query is different from the other two. The first returns only one row per group; the other two return every row of the original table.
You should use the version that does what you want, which I presume is the first one. If you add select distinct or group by to the other queries, that will probably add overhead that will make them slower -- but you can test on your data to see if that is true.
Your intuition is correct that the first query does unnecessary work. In databases that support indexes fully, a correlated subquery is often faster. I don't think that would be the case in Redshift, however.
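For reference, the correlated-subquery form mentioned above would look something like this (an untested sketch using the column names from the question; note it returns more than one row per (a, b) group if there are ties on the latest date):
SELECT t.a, t.b, t.c
FROM T t
WHERE t.date = (SELECT MAX(t2.date)
                FROM T t2
                WHERE t2.a = t.a AND t2.b = t.b);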

Cumulating value of previous row in Column FINAL_VALUE

My table name is "fundt" and my question is:
how do I compute the cumulative sum of previous rows in column FINAL_VALUE?
I think it is possible with a cross join, but I don't know how.
I suspect that you want window functions with a window frame:
select t.*,
       sum(final_value) over (
           order by it_month
           rows between unbounded preceding and 1 preceding
       ) cumulative_final_value
from fundt t
This gives you a cumulative sum() of previous rows (not including the current row), using column it_month for ordering. You might need to adapt that to your exact requirement, but this seems to be the logic that you are looking for.
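On invented data, the frame that stops at 1 PRECEDING behaves like this:
-- it_month  final_value  cumulative_final_value
-- 1         100          (null: no previous rows)
-- 2         200          100
-- 3         300          300
-- 4         400          600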

Oracle LAST_VALUE only with order by in analytic clause

I have this schema (Oracle 11g R2):
CREATE TABLE users (
    id   INT NOT NULL,
    name VARCHAR(30) NOT NULL,
    num  INT NOT NULL
);
INSERT INTO users (id, name, num) VALUES (1,'alan',5);
INSERT INTO users (id, name, num) VALUES (2,'alan',4);
INSERT INTO users (id, name, num) VALUES (3,'julia',10);
INSERT INTO users (id, name, num) VALUES (4,'maros',77);
INSERT INTO users (id, name, num) VALUES (5,'alan',1);
INSERT INTO users (id, name, num) VALUES (6,'maros',14);
INSERT INTO users (id, name, num) VALUES (7,'fero',1);
INSERT INTO users (id, name, num) VALUES (8,'matej',8);
INSERT INTO users (id, name, num) VALUES (9,'maros',55);
And I execute the following queries, using the LAST_VALUE analytic function with only an ORDER BY analytic clause.
My assumption is that this query executes over one partition, the whole table (as the PARTITION BY clause is missing). It will sort the rows by name in the given partition (the whole table) and use the default windowing clause RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.
select us.*,
       last_value(num) over (order by name) as lv
from users us;
But the query executed above gives exactly the same results as the following one. My assumption concerning the second query is that it first partitions the table rows by name, then sorts the rows in every partition by num, and then applies the windowing clause RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING over each partition to get the LAST_VALUE.
select us.*,
       last_value(num) over (partition by name
                             order by num
                             RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as lv
from users us;
One of my assumptions is clearly wrong, because the two queries above give the same result. It looks like the first query also orders records by num behind the scenes. Could you please suggest what is wrong with my assumptions and why these queries return the same results?
The answer is simple. For whatever reason, Oracle chose to make LAST_VALUE deterministic when a logical (RANGE) offset is used in the windowing clause, whether explicitly or implicitly by default. Specifically, in such cases, the HIGHEST value of the measured expression is selected from among the set of rows tied by the ORDER BY sorting.
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/LAST_VALUE.html#GUID-A646AF95-C8E9-4A67-87BA-87B11AEE7B79
Towards the bottom of that page in the Oracle documentation, we can read:
When duplicates are found for the ORDER BY expression, the LAST_VALUE
is the highest value of expr [...]
Why does the documentation say that in the examples section, and not in the explanation of the function? Because, as is very often the case, the documentation doesn't seem to be written by qualified people.
From this blog in Oracle magazine, here is what happens if you use an ORDER BY clause in a window function without specifying anything else:
An ORDER BY clause, in the absence of any further windowing clause parameters, effectively adds a default windowing clause: RANGE UNBOUNDED PRECEDING, which means, “The current and previous rows in the current partition are the rows that should be used in the computation.” When an ORDER BY clause isn’t accompanied by a PARTITION clause, the entire set of rows used by the analytic function is the default current partition.
So, your first query is actually the same as this:
SELECT us.*, LAST_VALUE(num) OVER (ORDER BY name RANGE UNBOUNDED PRECEDING) AS lv
FROM users us;
If you run the above query, you will get the current behavior you are seeing, which will return a separate last value for each name. This differs from the following query:
SELECT
us.*,
LAST_VALUE(num) OVER (ORDER BY name
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS lv
FROM users us;
This just generates the value 8 for the last value of num, which corresponds to the value for matej, who is the last name when sorting name ascending.
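For the sample data in the question, here are the per-name values that both of the original queries should produce (hand-computed; the fiddle below can confirm):
-- name   num values    lv (both queries)
-- alan   5, 4, 1       5
-- fero   1             1
-- julia  10            10
-- maros  77, 14, 55    77
-- matej  8             8
-- In the first query the RANGE frame ends at the current row's name-peers and
-- Oracle resolves the tie by taking the highest num; in the second, the
-- full-partition frame ordered by num also ends at the highest num.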
Here is a db<>fiddle, in case anyone wants to play with them.
Let me assume that you think that the second query is returning the correct results.
select us.*,
       last_value(num) over (partition by name
                             order by num
                             RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
                            ) as lv
from users us;
Let me also point out that this is more succinctly written as:
select us.*,
       max(num) over (partition by name) as lv
from users us;
That is irrelevant to your question, but I want to point it out.
Now, why does this give the same results?
select us.*,
       last_value(num) over (order by name) as lv
from users us;
Well, with no windowing clause, this is equivalent to:
select us.*,
       last_value(num) over (order by name
                             range between unbounded preceding and current row
                            ) as lv
from users us;
The range is very important here. The frame does not stop at the current row; it extends to all rows with the same value of name.
In my understanding of the documentation around ORDER BY, any num value from the rows with the same name could be chosen. Why? Sorting in SQL (and in Oracle) is not stable, meaning it is not guaranteed to preserve the original ordering of the rows.
In this particular case, it might be coincidence that the last value happens to be the largest value. Or Oracle might, for some reason, be adding num to the ordering.
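One way to probe this (an untested sketch) is to break the ties explicitly with the unique id column:
select us.*,
       last_value(num) over (order by name, id) as lv
from users us;
-- With (name, id) unique, each row is its own peer group, so the default RANGE
-- frame ends exactly at the current row and lv should simply equal that row's
-- num, confirming that tie handling drove the original result.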