I have the following query:
SELECT
Group as [Grupo],
COUNT(*) as [Total]
FROM
Table
WHERE
Status NOT IN ('Closed', 'Cancelled', 'Resolved') AND
DATEDIFF(day,Submit_Date,GETDATE()) > 30
GROUP BY
Group,
DATEDIFF(day,Submit_Date,GETDATE())
The objective is to get tickets with aging above 30 days. The output is:
Group Total
Group A 4
Group A 1
Group A 2
Group A 2
Group B 1
Group B 1
What I'm hoping to see:
Group Total
Group A 9
Group B 2
I might be missing something dumb here... Can someone help me with this one? Thanks
Seems like you just need to group by "Group" only:
SELECT
[Group] as [Grupo],
COUNT(*) as [Total]
FROM
Table
WHERE
Status NOT IN ('Closed', 'Cancelled', 'Resolved') AND
DATEDIFF(day,Submit_Date,GETDATE()) > 30
GROUP BY
[Group]
You need to fix the GROUP BY. These keys define each row and apparently you want one row per group.
I would also suggest fixing the date logic:
SELECT [Group] as [Grupo], COUNT(*) as [Total]
FROM Table
WHERE Status NOT IN ('Closed', 'Cancelled', 'Resolved') AND
Submit_Date < DATEADD(DAY, -30, CONVERT(DATE, GETDATE()))
GROUP BY [Group];
Avoiding the function call on Submit_Date keeps the predicate sargable, which helps the optimizer use an index on Submit_Date and produce a better execution plan.
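For illustration, a rough sketch of an index that the rewritten predicate could seek on (the index name and column choices are assumptions, not from the original question):

-- Hypothetical supporting index; adjust the name and columns to the real schema.
CREATE INDEX IX_Table_SubmitDate
    ON [Table] (Submit_Date)
    INCLUDE (Status, [Group]);

-- With Submit_Date compared to a precomputed constant, the optimizer can seek this
-- index on the date range instead of evaluating DATEDIFF on every row:
-- Submit_Date < DATEADD(DAY, -30, CONVERT(DATE, GETDATE()))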
So, the query is simple but I am facing issues implementing the SQL logic. Here's the setup: suppose I have records like
Phoneno Company Date Amount
83838 xyz 20210901 100
87337 abc 20210902 500
47473 cde 20210903 600
The expected output is the past 7 days' progress as a running average of the amount for each date (the current date and the 6 days before):
Date amount avg
20210901 100 100
20210902 500 300
20210903 600 400
I tried
Select date, amount,
       (select avg(lg)
        from (Select case when lag(amount) over (order by NULL) is null
                          then amount
                          else lag(amount) over (order by NULL)
                     end as lg
              From table
              WHERE date >= t.date - 7)
       ) as avg
From table t;
But I am getting wrong avg values. Could anyone please help?
Note: I've tried it without lag too and it also gives the wrong averages.
You could use a self join to group the dates
select distinct
a.dt,
b.dt as preceding_dt, --just for QA purpose
a.amt,
b.amt as preceding_amt,--just for QA purpose
avg(b.amt) over (partition by a.dt) as avg_amt
from t a
join t b on a.dt-b.dt between 0 and 6
group by a.dt, b.dt, a.amt, b.amt; --to dedupe the data after the join
If you want to make your correlated subquery approach work, you don't really need the lag.
select dt,
amt,
(select avg(b.amt) from t b where a.dt-b.dt between 0 and 6) as avg_lg
from t a;
If you don't have multiple rows per date, this gets even simpler
select dt,
amt,
avg(amt) over (order by dt rows between 6 preceding and current row) as avg_lg
from t;
Also, the condition DATE>=t.date-7 you used is only bounded on one side, so it will qualify a lot of dates that shouldn't have been qualified.
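In other words, the window needs to be closed on both sides. A minimal sketch of the corrected condition, reusing the question's placeholder table and column names (exact date arithmetic is database-specific):

Select t.date, t.amount,
       (select avg(b.amount)
        from table b
        where b.date between t.date - 6 and t.date) as avg -- bounded on both ends
From table t;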
DEMO
You can use an analytic function with a windowing clause to get your results:
SELECT DISTINCT BillingDate,
AVG(amount) OVER (ORDER BY BillingDate
RANGE BETWEEN TO_DSINTERVAL('7 00:00:00') PRECEDING
AND TO_DSINTERVAL('0 00:00:00') FOLLOWING) AS RUNNING_AVG
FROM accounts
ORDER BY BillingDate;
Here is a DBFiddle showing the query in action (LINK)
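If you want a self-contained way to try this, here is a minimal sketch using the three sample rows from the question (the table and column names are assumptions matching the answer above, and the frame is written as 6 days preceding through the current row to mirror "current date and 6 days before" literally):

CREATE TABLE accounts (BillingDate DATE, amount NUMBER);
INSERT INTO accounts VALUES (DATE '2021-09-01', 100);
INSERT INTO accounts VALUES (DATE '2021-09-02', 500);
INSERT INTO accounts VALUES (DATE '2021-09-03', 600);

-- Expected running averages: 100, 300, 400
SELECT DISTINCT BillingDate,
       AVG(amount) OVER (ORDER BY BillingDate
                         RANGE BETWEEN INTERVAL '6' DAY PRECEDING
                               AND CURRENT ROW) AS running_avg
FROM accounts
ORDER BY BillingDate;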
The question is:
Show customer's transaction distribution for completed RIDE orders between 1st - 10th of April 2018 (Distribution of customers that have done 1 transaction, 2, 3,4,etc)
And a preview of the table being queried is:
My query is:
SELECT customer_no, COUNT(*) AS total_transaction FROM [bi-dwhdev-01:source.daily_order]
WHERE DATE(order_time) >= '2018-04-01' AND DATE(order_time) <= '2018-04-10'
GROUP BY customer_no
ORDER BY total_transaction DESC;
I'm wondering how to get a distribution like this in BigQuery (either Legacy or Standard SQL)?
Thanks in advance!
I think you want two levels of aggregation:
SELECT total_transaction, COUNT(*)
FROM (SELECT customer_no, COUNT(*) AS total_transaction
FROM [bi-dwhdev-01:source.daily_order]
WHERE DATE(order_time) >= '2018-04-01' AND DATE(order_time) <= '2018-04-10'
GROUP BY customer_no
) c
GROUP BY total_transaction
ORDER BY total_transaction DESC;
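In Standard SQL the same two-level aggregation might look like the sketch below (the backtick-quoted table name just mirrors the Legacy SQL identifier above, so adjust it to your actual project and dataset):

SELECT total_transaction, COUNT(*) AS num_customers
FROM (
  SELECT customer_no, COUNT(*) AS total_transaction
  FROM `bi-dwhdev-01.source.daily_order`
  WHERE DATE(order_time) BETWEEN DATE '2018-04-01' AND DATE '2018-04-10'
  GROUP BY customer_no
)
GROUP BY total_transaction
ORDER BY total_transaction DESC;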
I have data of this form:
user_id event started ended date
1 started 1 0 3/1/2018
1 ended 0 1 3/2/2018
2 started 1 0 3/5/2018
2 ended 0 1 3/22/2018
3 started 1 0 3/25/2018
There are other events and columns for 0/1 but they are irrelevant.
I am trying to get how long it takes each user to get from started to ended.
I tried datediff(day, case when started=1 then date end, case when ended=1 then date end) but since they are on different rows it doesn't work. Something along the lines of datediff over() could work, but that is obviously not a valid function.
Thanks in advance!
Assuming that you can't end before you started, you simply need MIN & MAX as Windowed Aggregates:
select user_id,
datediff(day,
min(date) over (partition by user_id),
max(date) over (partition by user_id))
from myTable
where event in ('started', 'ended')
Using this you can add any additional columns, too.
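For instance, a sketch that keeps the raw event rows next to the computed duration (same table and column names as above):

select user_id,
       event,
       date,
       datediff(day,
                min(date) over (partition by user_id),
                max(date) over (partition by user_id)) as duration
from myTable
where event in ('started', 'ended')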
If one result row is also ok, you can do simple aggregation:
select user_id,
min(date) as started,
max(date) as ended,
datediff(day,
min(date),
max(date)) as duration
from myTable
where event in ('started', 'ended')
group by user_id
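If you'd rather not rely on the start always preceding the end, a sketch using conditional aggregation instead (same names as above) keeps the two events explicit:

select user_id,
       min(case when started = 1 then date end) as started,
       min(case when ended = 1 then date end) as ended,
       datediff(day,
                min(case when started = 1 then date end),
                min(case when ended = 1 then date end)) as duration
from myTable
where event in ('started', 'ended')
group by user_id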
You could inner join the table on itself using the user_id column:
SELECT a.[user_id]
, a.[date] AS StartDate
, b.EndDate
, DATEDIFF(DAY, a.[date], b.EndDate) AS DateDifference
FROM dbo.TableNameHere AS a
INNER JOIN
(
SELECT [user_id]
, [date] AS EndDate
FROM dbo.TableNameHere
WHERE [ended] = 1
) AS b
ON a.[user_id] = b.[user_id]
WHERE a.[started] = 1
In my example above, you don't really need any of the columns in the first SELECT besides the DateDifference, I just had them for visibility in my testing.
I have 1 table named "Transactions" with the following columns: Date, ClientID, Amount.
I would like to have the active clients in the last 30 days.
Something like this:
Date | Active_Clients
2017/08/10 | 697
2017/08/11 | 710
2017/08/12 | 689
etc
Meaning: From 2017/08/10 minus 30 days to 2017/08/10 I had 697 active users.
I tried many ways and didn't manage to make it work.
One method looks something like this:
select d.dte, count(distinct t.clientid)
from (select '2017-08-10' as dte union all
select '2017-08-11' as dte union all
select '2017-08-12' as dte
) d left join
transactions t
on t.date <= d.dte and t.date > d.dte - interval '30' day
group by d.dte
order by d.dte;
The exact syntax for date constants, date arithmetic, and subqueries with constant values differs by database.
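As one concrete rendering, in SQL Server the same pattern might look like the sketch below (the three listed days are just placeholders for whatever reporting dates you need):

select d.dte, count(distinct t.ClientID) as Active_Clients
from (values (cast('2017-08-10' as date)),
             (cast('2017-08-11' as date)),
             (cast('2017-08-12' as date))) d(dte)
left join Transactions t
       on t.[Date] <= d.dte
      and t.[Date] > dateadd(day, -30, d.dte)
group by d.dte
order by d.dte;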
Thank you guys for your help.
I think I managed to find the answer based on @gordonlinoff's answer:
select distinct data
, (select count(distinct id_cliente) from appvitaminas..transacoes B
where b.Data between DATEADD(day,-30,a.Data) and A.Data) Active_Clients
from appvitaminas..transacoes A
You can also try this code:
select distinct [Date]
, (select count(distinct B.clientid) from Transactions B
where b.Date between DATEADD(day,-30,a.Date) and A.Date) Active_Clients
from Transactions A
transaction_date is in a date format.
What I'm actually trying to output is the COUNT DISTINCT of Unique_ID by quarter (i.e., how many times did a Unique_Id appear in a given quarter).
SELECT transaction_date ,
UNIQUE_ID
FROM panel
WHERE (some criteria = 'x')
GROUP BY UNIQUE_ID
Try this:
SELECT datepart(quarter,transaction_date),
count(distinct UNIQUE_ID) as cnt
FROM panel
WHERE (some criteria = 'x')
GROUP BY datepart(quarter,p.transaction_date)
But count(distinct) will do a sort, so it can take a lot of time. You can apply the distinct first in a derived table and then do the count:
SELECT datepart(quarter,p.transaction_date),
count(p.UNIQUE_ID) as cnt
FROM (select distinct transaction_date, UNIQUE_ID
from panel
WHERE (some criteria = 'x')) as p
GROUP BY datepart(quarter,p.transaction_date)
I'd use date_trunc:
select
date_trunc ('quarter', transaction_date), count (distinct unique_id)
from panel
where criteria = 'x'
group by 1
This presupposes that when you say "by quarter" that 1Q2015 is different than 1Q2014.
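If you instead want 1Q2015 and 1Q2014 folded together, a sketch extracting just the quarter number (still PostgreSQL-flavored, same placeholder names as above):

select
  extract(quarter from transaction_date) as qtr,
  count(distinct unique_id)
from panel
where criteria = 'x'
group by 1;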
SELECT DATEPART(QUARTER, transaction_date),
COUNT(DISTINCT UNIQUE_ID)
FROM panel
GROUP BY DATEPART(QUARTER, transaction_date)