In the following table
----------------------------
| id | day        | count |
----------------------------
| 1  | 2013-01-01 | 10    |
| 1  | 2013-01-05 | 20    |
| 1  | 2013-01-08 | 45    |
----------------------------
In the second and third rows the count column is cumulative, i.e. 20 = (10 from the first row + 10 additional count) and 45 = (20 from the second row + 25 additional count). How can the second and third rows (and further ones) be inserted with a cumulative add in PostgreSQL?
Note: the additional count is read from a variable in a program. So the aim is to store this value in the 'count' column in PostgreSQL, but also to add it to the 'count' of the last entry in ascending date order.
Since you don't say where the additional count comes from, I assume there is an additional_count column:
select *,
       sum(additional_count) over (order by "day") as "count"
from t
order by "day";
The sum function computes a running total when used as a window function; it acts as a window function whenever it has an over clause.
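On the question's sample data, assuming an additional_count column that holds the per-row increments (10, 10 and 25, per the arithmetic above), the query would return:

 id | day        | additional_count | count
----+------------+------------------+------
 1  | 2013-01-01 | 10               | 10
 1  | 2013-01-05 | 10               | 20
 1  | 2013-01-08 | 25               | 45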
If the problem is what an insert statement using a select could look like:
insert into x (id, day, count)
select 1, current_timestamp,
       coalesce((select max(count) from x), 0) + 10;
But this is not necessarily the best way to solve the problem.
Related
I would like to run the query below, which does what I want for week 1:
Select week(datetime), count(customer_call) from table where week(datetime) = 1 and week(orderdatetime) < 7
... but for weeks 2, 3, 4, 5 and 6, all in one query, with week(orderdatetime) still covering the 6 weeks following the week(datetime) value.
This means that for 'week(datetime) = 2', 'week(orderdatetime)' would be between 2 and 7 and so on.
'datetime' is a datetime field denoting registration.
'customer_call' is a datetime field denoting when they called.
'orderdatetime' is a datetime field denoting when they ordered.
Thanks!
I think you want group by:
Select week(datetime), count(customer_call)
from table
where week(datetime) = 1 and week(orderdatetime) < 7
group by week(datetime);
I would also point out that week doesn't take the year into account, so you might want to include that in the group by or in a where filter.
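For example (a sketch, assuming MySQL-style year() and week() functions to match the question's syntax):

Select year(datetime), week(datetime), count(customer_call)
from table
where week(datetime) = 1 and week(orderdatetime) < 7
group by year(datetime), week(datetime);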
EDIT:
If you want 6 weeks of cumulative counts, then use:
Select week(datetime), count(customer_call),
       sum(count(customer_call)) over (order by week(datetime)
           rows between 5 preceding and current row) as running_sum_6
from table
group by week(datetime);
Note: If you want to filter this to particular weeks, then make this a subquery and filter in the outer query.
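A sketch of that pattern (the week range in the outer filter is illustrative):

select *
from (
    Select week(datetime) as wk, count(customer_call) as calls,
           sum(count(customer_call)) over (order by week(datetime)
               rows between 5 preceding and current row) as running_sum_6
    from table
    group by week(datetime)
) t
where wk between 1 and 6;

Filtering in the outer query keeps the window sum computed over all weeks before any rows are discarded.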
I have a time series in a SQLite Database and want to analyze it.
The important part of the time series consists of a column with different but not unique string values.
I want to do something like this:
Value | concat | countValue
------+--------+-----------
A     | A      | 1
A     | A,A    | 2
B     | A,A,B  | 1
B     | A,B,B  | 2
B     | B,B,B  | 3
C     | B,B,C  | 1
B     | B,C,B  | 2
I don't know how to get the countValue column. It should count all Values of the partition equal to the current row's Value.
I tried this, but it just counts all Values in the partition, not the Values equal to the current row's Value:
SELECT
    Value,
    group_concat(Value) OVER wind AS concat,
    Sum(Case When Value Like Value Then 1 Else 0 End) OVER wind AS countValue
FROM TimeSeries
WINDOW
    wind AS (ORDER BY date ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
ORDER BY
    date;
The query is also limited by these factors:
The query should work with any amount of unique Values
The query should work with any Partition Size (ROWS BETWEEN n PRECEDING AND CURRENT ROW)
Is this even possible using only SQL?
Here is an approach using string functions:
select
value,
group_concat(value) over wind as concat,
(
length(group_concat(value) over wind) - length(replace(group_concat(value) over wind, value, ''))
) / length(value) cnt_value
from timeseries
window wind as (order by date rows between 2 preceding and current row)
order by date;
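Note that the length/replace trick assumes no value is a substring of another (values like 'A' and 'AB' would throw the count off). A sketch of an alternative without that limitation, using row_number() and a correlated subquery (window functions require SQLite 3.25+):

with numbered as (
    select value, date, row_number() over (order by date) as rn
    from timeseries
)
select n1.value,
       (select count(*)
        from numbered n2
        where n2.rn between n1.rn - 2 and n1.rn
          and n2.value = n1.value) as countvalue
from numbered n1
order by n1.date;

The condition rn between n1.rn - 2 and n1.rn mirrors ROWS BETWEEN 2 PRECEDING AND CURRENT ROW, so the partition size generalizes by changing the 2.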
I have a table which updates on a weekly basis, and I need to check the count variation between one week's and the previous week's values. I tried the following:
Select
    case when F.wk_end_d = max(F.wk_end_d) over (partition by F.wk_end_d)
         then F.the_count end as count
from
(
    select wk_end_d, count(*) as the_count
    from table A
    where wk_end_d between date_sub('2019-03-02', 7) and '2019-03-02'
    group by wk_end_d
) F
which gives me values like the following:
100
200
but I need to get the values 100 and 200 in 2 different columns, as I need to build some other calculations on top of them.
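One way to pivot the two weekly counts into separate columns is conditional aggregation. A sketch reusing the question's inner query and Hive-style date_sub (the column aliases are illustrative):

select
    max(case when F.wk_end_d = date_sub('2019-03-02', 7) then F.the_count end) as prev_wk_count,
    max(case when F.wk_end_d = '2019-03-02' then F.the_count end) as curr_wk_count
from
(
    select wk_end_d, count(*) as the_count
    from table A
    where wk_end_d between date_sub('2019-03-02', 7) and '2019-03-02'
    group by wk_end_d
) F;

With both counts on one row, the variation can be computed directly, e.g. curr_wk_count - prev_wk_count.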
I am trying to insert a value and accumulate the inserted values in another column.
For example, I have one table with 3 columns: id, amount, total:
id | amount | total
1 | 100 | 100
2 | 200 | 300
3 | -100 | 200
Expected result:
every time a new amount value is entered, I want it added to the running value in the total column.
INSERT INTO public.tb_total_amount
(id, amount, total, create_time)
VALUES(1, 100, balance+amount, NOW());
Is it OK to accumulate negative values? And can anyone correct my query? Thank you.
I recommend against doing this, and I instead suggest just using SUM as an analytic function:
SELECT
id,
amount,
SUM(amount) OVER (ORDER BY id) total
FROM yourTable;
The logic behind this answer is that your rolling sum total is derived, and not original, data. Therefore, it is better to just compute it on the fly when you need it, rather than storing it.
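If the running total should always be available by name, one option is to wrap the computation in a view; a sketch, assuming the question's table (the view name is illustrative):

CREATE VIEW v_tb_total_amount AS
SELECT id,
       amount,
       SUM(amount) OVER (ORDER BY id) AS total
FROM public.tb_total_amount;

Querying the view always yields up-to-date totals without storing derived data.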
If you really want to insert the correct total during a single insert, you may try:
INSERT INTO public.tb_total_amount (amount, total, create_time)
SELECT
100,
COALESCE((SELECT total FROM public.tb_total_amount
          ORDER BY id DESC LIMIT 1), 0) + 100,
NOW();
I have a table that has "months" as columns and "customer ID" as primary key.
I want to average all the values for each month separately for values not equal to 99999.
My current query for a single month is as follows and is working fine:
SELECT Avg([Table1]![Dec10]) AS Expr1
FROM Table1
WHERE ((([Table1]![Dec10])<>99999));
However, when I am trying to add the 2nd month, it is combining the first month's condition with the 2nd month's condition.
SELECT Avg([Table1]![Dec10]) AS Expr1, Avg([Table1]![Dec11]) AS Expr2
FROM Table1
WHERE ((([Table1]![Dec10])<>99999) And ([Table1]![Dec11])<>99999);
I need to have each month separate, i.e. calculate the average of Dec10<>99999, and in the second column, calculate the average of Dec11<>99999.
You need to use a Group By clause in your query, and then you can separate your output by months.
In this case it would be convenient to use GROUP BY.
If you have distinct month values, e.g. "jan10", "feb10", "mar12", etc., you can group on the months and then check that the value is not 99999.
SELECT avg(value), months
FROM tablename
WHERE value <> 99999
GROUP BY months
That is, if you have the months stored as values within a column; but judging from your database design, they may be stored in another way.
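If the months are separate columns, as in the question, a sketch of normalizing them first with a union (Access-style SQL; the month_name and val names are illustrative):

SELECT month_name, Avg(val) AS avg_val
FROM (
    SELECT 'Dec10' AS month_name, Dec10 AS val FROM Table1
    UNION ALL
    SELECT 'Dec11' AS month_name, Dec11 AS val FROM Table1
) AS u
WHERE val <> 99999
GROUP BY month_name;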
I need to have each month separate, i.e. calculate the average of Dec10<>99999, and in the second column, calculate the average of Dec11<>99999.
In Access 2010, for [Table1]...
CustomerID  Dec10  Dec11
----------  -----  -----
         1      1      5
         2      2  99999
         3  99999      0
         4      3      7
...the query...
SELECT
DAvg("Dec10", "Table1", "Dec10<>99999") AS AvgOfDec10,
DAvg("Dec11", "Table1", "Dec11<>99999") AS AvgOfDec11
FROM (SELECT COUNT(*) AS n FROM Table1)
...produces:
AvgOfDec10  AvgOfDec11
----------  ----------
         2           4
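(The subquery in the FROM clause merely supplies a single row to select from, since Access requires a FROM clause; the DAvg expressions do the actual filtering and averaging.)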