I am trying to extract the first row per account (the most recent balance for each account) from a balance table, but I can't figure out how to write the SQL for it.
I tried MAX, SUM, GROUP BY, but nothing has worked so far.
Date Account Balance
4/6/2019 A 90
4/5/2019 B 80
4/4/2019 C 70
4/3/2019 C 60
4/2/2019 D 80
4/1/2019 D 100
So how can I make a query which will show the following results?
Account Balance in April
Account Balance
A 90
B 80
C 70
D 80
Use the analytic function FIRST_VALUE if your DBMS supports it:
select distinct Account,
       FIRST_VALUE(balance) OVER (partition by Account ORDER BY date desc) AS balance
from table_name
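If the result should also be restricted to April, as in the expected output, a sketch with a date filter (assuming date is a DATE column and your DBMS supports standard date literals) would be:
select distinct Account,
       FIRST_VALUE(balance) OVER (partition by Account ORDER BY date desc) AS balance
from table_name
where date >= DATE '2019-04-01'   -- April only; adjust the literal syntax to your DBMS
  and date <  DATE '2019-05-01'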
First group by account to get the max date for each account and then join to the table:
select t.account, t.balance
from tablename t inner join (
select account, max(date) maxdate
from tablename
group by account
) g on g.account = t.account and g.maxdate = t.date
You could use a correlated subquery for the max date:
select account, balance
from my_table t
where date = (
    select max(date)
    from my_table
    where account = t.account
)
A canonical method would be filtering in the where clause:
select b.*
from balances b
where b.date = (select max(b2.date)
from balances b2
where b2.account = b.account and
b2.date >= '2019-04-01' and
b2.date < '2019-05-01'
);
Specific databases may have other approaches to this problem. The above generally has very good performance, particularly with an index on balances(account, date).
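For reference, a sketch of that index, using the table and column names from the query above (exact syntax can vary by database):
create index balances_account_date_idx on balances (account, date);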
Related
I have a table with the name of each customer and a date column, and I want to write a query that gives me the number of gap days for each user.
name date
ali 2022-01-01
ali 2022-01-04
ali 2022-01-05
ser 2022-03-01
The answer should be 3 for ali, and null for ser.
Here is what I tried:
select name ,min(date) over (partition by name order by date) start_date , max(date) over (partition by name order by date) end_date from table
One approach is to use a window function (like LAG or LEAD) to find the prior/next date and then find the difference between the dates (current and prior, for example) using the DATEDIFF function. Something like this:
SELECT name,
       MAX(datediff(date, PreviousDate)) AS Gap
FROM (SELECT name,
             date,
             LAG(date) OVER (PARTITION BY name ORDER BY date) AS PreviousDate
      FROM table t) x
GROUP BY name
My approach is to match every record with the closest later date, then find the maximum gap and left join with the original table to get the gap for each user.
Here is the MySQL version:
select
cu.name, max(cg.gap) maxgap
from
customers cu left join
(
select
c.name, datediff(min(cn.date), c.date) gap
from
customers c left join customers cn on c.name = cn.name
where
cn.date > c.date
group by
c.name, c.date
) cg
on cu.name = cg.name
group by
cu.name
So the query is simple, but I am facing issues implementing the SQL logic. Suppose I have records like:
Phoneno Company Date Amount
83838 xyz 20210901 100
87337 abc 20210902 500
47473 cde 20210903 600
The expected output is the past 7 days' progress as a running average of the amount for each date (the current date and the 6 days before it):
Date amount avg
20210901 100 100
20210902 500 300
20210903 600 400
I tried
Select date, amount, select
avg(lg) from (
Select case when lag(amount)
Over (order by NULL) IS NULL
THEN AMOUNT
ELSE
lag(amount)
Over (order by NULL) END AS LG)
From table
WHERE DATE>=t.date-7) as avg
From table t;
But I am getting wrong avg values. Could anyone please help?
Note: I've tried without LAG too, and it also gives wrong averages.
You could use a self join to group the dates
select distinct
a.dt,
b.dt as preceding_dt, --just for QA purpose
a.amt,
b.amt as preceding_amt,--just for QA purpose
avg(b.amt) over (partition by a.dt) as avg_amt
from t a
join t b on a.dt-b.dt between 0 and 6
group by a.dt, b.dt, a.amt, b.amt; --to dedupe the data after the join
If you want to make your correlated subquery approach work, you don't really need the lag.
select dt,
amt,
(select avg(b.amt) from t b where a.dt-b.dt between 0 and 6) as avg_lg
from t a;
If you don't have multiple rows per date, this gets even simpler
select dt,
amt,
avg(amt) over (order by dt rows between 6 preceding and current row) as avg_lg
from t;
Also, the condition DATE>=t.date-7 you used is only bounded on one side, meaning it will qualify a lot of dates that should not have been qualified.
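A version of that condition bounded on both sides, keeping your column names (a sketch only; date arithmetic syntax varies by database), would be:
-- only the current date and the 6 days before it
WHERE DATE >= t.date - 6
  AND DATE <= t.date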
DEMO
You can use an analytic function with a windowing clause to get your results:
SELECT DISTINCT BillingDate,
AVG(amount) OVER (ORDER BY BillingDate
RANGE BETWEEN TO_DSINTERVAL('7 00:00:00') PRECEDING
AND TO_DSINTERVAL('0 00:00:00') FOLLOWING) AS RUNNING_AVG
FROM accounts
ORDER BY BillingDate;
Here is a DBFiddle showing the query in action (LINK)
I have 1 table named "Transactions" with the following columns: Date, ClientID, Amount.
I would like to have the active clients in the last 30 days.
Something like this:
Date | Active_Clients
2017/08/10 | 697
2017/08/11 | 710
2017/08/12 | 689
etc
Meaning: From 2017/08/10 minus 30 days to 2017/08/10 I had 697 active users.
I tried many ways and didn't manage to do it.
One method looks something like this:
select d.dte, count(distinct t.clientid)
from (select '2017-08-10' as dte union all
select '2017-08-11' as dte union all
select '2017-08-12' as dte
) d left join
transactions t
on t.date <= d.dte and t.date > d.dte - interval '30' day
group by d.dte
order by d.dte;
The exact syntax for date constants, date arithmetic, and subqueries with constant values differs by database.
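For example, in SQL Server the date arithmetic in the join condition could be written with DATEADD (a sketch only, keeping the same column names):
on t.date <= d.dte and t.date > dateadd(day, -30, d.dte)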
Thank you guys for your help.
I think I managed to find the answer with @gordonlinoff's answer:
select distinct data
, (select count(distinct id_cliente) from appvitaminas..transacoes B
where b.Data between DATEADD(day,-30,a.Data) and A.Data) Active_Clients
from appvitaminas..transacoes A
You can try this code too:
select distinct [Date]
, (select count(distinct B.clientid) from Transactions B
where b.Date between DATEADD(day,-30,a.Date) and A.Date) Active_Clients
from Transactions A
I have data similar to this:
Price DateChanged Product
10 2012-01-01 A
12 2012-02-01 A
30 2012-03-01 A
10 2012-09-01 A
12 2013-01-01 A
110 2012-01-01 B
112 2012-02-01 B
130 2012-03-01 B
110 2012-09-01 B
112 2013-01-01 B
I want to calculate average value, but the challenge is this:
Look at the first record: price 10 is valid for a duration of one month, price 12 is valid for one month, while price 30 is valid for six months.
So a basic average for product A, (10+12+30+10+12)/5, would give 14.8, while taking duration into account the average price would be ~20.1.
What is the best approach to solve this?
I know I could create a sub-query with a row_number() to join against to calculate a duration, but is there a better way? SQL Server has powerful features like STDistance, so surely there is a function for this?
What you are looking for is called a weighted average, and AFAIK there is no built-in function in SQL Server that calculates it for you. However, it is not that hard to calculate by hand.
First, you need to find the weight of each data point; in this case, that is the duration of each price period. You might have some additional columns in your data that could enable an easier lookup, but you could do it like this as well:
SELECT p1.Product, p1.Price, p1.DateChanged AS DateStart,
isnull(min(p2.DateChanged),getdate()) AS DateEnd
INTO #PricePlanStartEnd
FROM PricePlan p1
LEFT OUTER JOIN PricePlan p2
ON p1.DateChanged < p2.DateChanged
AND p1.Product =p2.Product
GROUP BY p1.Product, p1.Price, p1.DateChanged
ORDER BY p1.Product, p1.DateChanged
This creates a #PricePlanStartEnd temporary table that has the start and the end of each price period. I've used getdate() as the end of the current time period. If you need to just calculate an average up to the last price change, just use INNER JOIN instead of the LEFT OUTER JOIN.
After that you just need to divide the sum of (price * period length) by the total length of all the periods to get the answer.
Here is an SQL Fiddle with the calculation
Also, when you're working with months, you must remember that not all months are equal, so the price for December was active longer than it was for February.
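A minimal sketch of that final step, assuming the #PricePlanStartEnd table created above and weighting each price by the number of days it was in effect:
-- weighted average: sum(price * days in effect) / total days per product
SELECT Product,
       SUM(Price * 1.0 * DATEDIFF(day, DateStart, DateEnd))
         / NULLIF(SUM(DATEDIFF(day, DateStart, DateEnd)), 0) AS WeightedAvgPrice
FROM #PricePlanStartEnd
GROUP BY Product;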
Using a CTE and row_number() to get the monthly average up to the last dateChanged. Fiddle-Demo
;with cte as (
select product, dateChanged, price,
row_number() over (partition by product order by datechanged) rn
from x
)
select t1.product,
sum(t1.price *1.0 * datediff(month, t1.dateChanged,t2.dateChanged))/12 monthlyAvg
from cte t1 join cte t2 on t1.product = t2.product
and t1.rn +1 = t2.rn
group by t1.product
--Results
Product MonthlyAvg
A 20.166666
B 120.166666
Or if you need an up-to-date daily average, then use a LEFT JOIN. Fiddle-Demo
;with cte as (
select product, dateChanged, price,
row_number() over (partition by product order by datechanged) rn
from x
)
select t1.product,
sum(t1.price *1.0 *
datediff(day, t1.dateChanged,isnull(t2.dateChanged,getdate())))/365 dailyAvg
from cte t1 left join cte t2 on t1.product = t2.product
and t1.rn +1 = t2.rn
group by t1.product
--Results
product dailyAvg
A 21.386301
B 130.975342
I've been mulling over this problem for a couple of hours now with no luck, so I thought people on SO might be able to help :)
I have a table with data regarding processing volumes at stores. The first three columns shown below can be queried from that table. What I'm trying to do is add a 4th column that's basically a flag indicating whether a store has processed >= $150; if so, it displays the corresponding date. The way this works is that the first instance where the store has surpassed $150 is the date that gets displayed. Subsequent processing volumes don't count after the first time the activated date is hit. For example, for store 4, there's just one instance of the activated date.
store_id sales_volume date activated_date
----------------------------------------------------
2 5 03/14/2012
2 125 05/21/2012
2 30 11/01/2012 11/01/2012
3 100 02/06/2012
3 140 12/22/2012 12/22/2012
4 300 10/15/2012 10/15/2012
4 450 11/25/2012
5 100 12/03/2012
Any insights as to how to build out this fourth column? Thanks in advance!
The solution starts by calculating the cumulative sales. Then, you want the activation date only when the cumulative sales first pass through the $150 level. This happens when adding the current sales amount pushes the cumulative amount over the threshold. The following case expression handles this:
select t.store_id, t.sales_volume, t.date,
(case when 150 > cumesales - t.sales_volume and 150 <= cumesales
then date
end) as ActivationDate
from (select t.*,
sum(sales_volume) over (partition by store_id order by date) as cumesales
from t
) t
If you have an older version of Postgres that does not support cumulative sum, you can get the cumulative sales with a subquery like:
(select sum(sales_volume) from t t2 where t2.store_id = t.store_id and t2.date <= t.date) as cumesales
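A sketch of the full query using that subquery in place of the window function (same hypothetical table t and column names as above):
select t.store_id, t.sales_volume, t.date,
       (case when 150 > cumesales - t.sales_volume and 150 <= cumesales
             then t.date
        end) as ActivationDate
from (select t.*,
             (select sum(t2.sales_volume)
              from t t2
              where t2.store_id = t.store_id and t2.date <= t.date
             ) as cumesales   -- running total per store, without a window function
      from t
     ) t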
Variant 1
You can LEFT JOIN to a table that calculates the first date surpassing the $150 limit per store:
SELECT t.*, b.activated_date
FROM tbl t
LEFT JOIN (
SELECT store_id, min(thedate) AS activated_date
FROM (
SELECT store_id, thedate
,sum(sales_volume) OVER (PARTITION BY store_id
ORDER BY thedate) AS running_sum
FROM tbl
) a
WHERE running_sum >= 150
GROUP BY 1
) b ON t.store_id = b.store_id AND t.thedate = b.activated_date
ORDER BY t.store_id, t.thedate;
The calculation of the first day has to be done in two steps, since the window function accumulating the running sum has to be applied in a separate SELECT.
Variant 2
Another window function instead of the LEFT JOIN. May or may not be faster. Test with EXPLAIN ANALYZE.
SELECT *
,CASE WHEN running_sum >= 150 AND thedate = first_value(thedate)
OVER (PARTITION BY store_id, running_sum >= 150 ORDER BY thedate)
THEN thedate END AS activated_date
FROM (
SELECT *
,sum(sales_volume)
OVER (PARTITION BY store_id ORDER BY thedate) AS running_sum
FROM tbl
) b
ORDER BY store_id, thedate;
->sqlfiddle demonstrating both.