Monthly average without aggregating on monthly level - sql

The query below aggregates order_value using data from the last 6 months. Without creating an additional subquery, I would like to calculate the monthly average of all orders over the last 6 months (see the desired output table below).
Table
customer_id | order_timestamp     | order_value
A           | 2020-01-01 10:30:51 | 100
Desired output table
customer_id | sum_orders | monthly_avg_orders
A           | 100.000    | 16.666
This is what I already have
select
customer_id,
sum(order_value)
from table
group by customer_id
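One way this could be done without an additional subquery (a sketch, not a tested answer: it assumes a dialect with date_trunc, such as PostgreSQL, and divides the sum by the number of distinct months in the 6-month window):
select
    customer_id,
    sum(order_value) as sum_orders,
    -- total divided by the number of distinct months present;
    -- with 6 months of data this gives 100.000 / 6 = 16.666
    sum(order_value)::numeric
        / count(distinct date_trunc('month', order_timestamp)) as monthly_avg_orders
from orders  -- "table" in the question; renamed here because TABLE is a reserved word
where order_timestamp >= current_date - interval '6 months'
group by customer_id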

Related

How to calculate average monthly number of some action in some period in Teradata SQL?

I have a table in Teradata SQL like below:
ID  | trans_date
----+------------
123 | 2021-01-01
887 | 2021-01-15
123 | 2021-02-10
45  | 2021-03-11
789 | 2021-10-01
45  | 2021-09-02
I need to calculate the average monthly number of transactions made by customers in the period between 2021-01-01 and 2021-09-01, so the client with "ID" = 789 will not be counted because he made his transaction later.
In the first month (01) there were 2 transactions
In the second month there was 1 transaction
In the third month there was 1 transaction
In the ninth month there was 1 transaction
So the result should be (2+1+1+1) / 4 = 1.25, isn't it?
How can I calculate this in Teradata SQL? Of course, this is only a sample of my data.
SELECT ID, AVG(txns)
FROM (
    SELECT ID, TRUNC(trans_date, 'MON') AS mth, COUNT(*) AS txns
    FROM mytable
    -- WHERE condition matches the question but likely want to
    -- use end date 2021-09-30 or use mth instead of trans_date
    WHERE trans_date BETWEEN DATE '2021-01-01' AND DATE '2021-09-01'
    GROUP BY id, mth
) mth_txn
GROUP BY id;
Your logic translated to SQL:
--(2+1+1+1) / 4
SELECT id, COUNT(*) / COUNT(DISTINCT TRUNC(trans_date,'MON')) AS avg_tx
FROM mytable
WHERE trans_date BETWEEN date'2021-01-01' and date'2021-09-01'
GROUP BY id;
You should compare with Fred's answer to see which is more efficient on your data.
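As a side note, the worked example in the question ((2+1+1+1) / 4 = 1.25) counts transactions across all customers rather than per ID. If that overall figure is what is wanted, a sketch (using the wider end date suggested in the comment above, and a cast to avoid integer division) could look like:
-- overall average number of transactions per month across all customers: (2+1+1+1) / 4 = 1.25
SELECT CAST(COUNT(*) AS DECIMAL(9,2))
       / COUNT(DISTINCT TRUNC(trans_date, 'MON')) AS avg_monthly_txns
FROM mytable
WHERE trans_date BETWEEN DATE '2021-01-01' AND DATE '2021-09-30';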

Cumulative Sum Query in SQL table with distinct elements

I have a table like this, with Date of Sale and insurance Salesman Name as column names:
Date of Sale | Salesman Name | Sale Amount
2021-03-01 | Jack | 40
2021-03-02 | Mark | 60
2021-03-03 | Sam | 30
2021-03-03 | Mark | 70
2021-03-02 | Sam | 100
I want to do a group by, using the date of sale. The next column should display the cumulative count of the sellers who have made a sale up to that date, but the same seller shouldn't be counted again.
For example,
The following table is incorrect,
Date of Sale | Count(Salesman Name) | Sum(Sale Amount)
2021-03-01 | 1 | 40
2021-03-02 | 3 | 200
2021-03-03 | 5 | 300
The following table is correct,
Date of Sale | Count(Salesman Name) | Sum(Sale Amount)
2021-03-01 | 1 | 40
2021-03-02 | 3 | 200
2021-03-03 | 3 | 300
I am not sure how to frame the SQL query, because there are two conditions involved here: a cumulative count that ignores duplicates. I think the OVER clause along with UNBOUNDED PRECEDING may be of some use here? I would appreciate your help.
Edit - I have added the Sale Amount column. I need the cumulative sum of Sale Amount as well, but in this case all sale amounts should be counted, unlike the salesman case where only unique names are considered.
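For reference, the sample data above as a create/insert script (a sketch; the table and column names yourTable, SaleDate, Salesman and SaleAmount are the ones assumed by the first answer below, and the syntax should work on most databases):
-- hypothetical setup matching the sample data in the question
CREATE TABLE yourTable (
    SaleDate   DATE,
    Salesman   VARCHAR(50),
    SaleAmount INT
);

INSERT INTO yourTable (SaleDate, Salesman, SaleAmount) VALUES
    ('2021-03-01', 'Jack', 40),
    ('2021-03-02', 'Mark', 60),
    ('2021-03-03', 'Sam', 30),
    ('2021-03-03', 'Mark', 70),
    ('2021-03-02', 'Sam', 100);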
One approach uses a self join and aggregation:
WITH cte AS (
    SELECT t1.SaleDate,
           COUNT(CASE WHEN t2.Salesman IS NULL THEN 1 END) AS cnt,
           SUM(t1.SaleAmount) AS amt
    FROM yourTable t1
    LEFT JOIN yourTable t2
        ON t2.Salesman = t1.Salesman AND
           t2.SaleDate < t1.SaleDate
    GROUP BY t1.SaleDate
)
SELECT
    SaleDate,
    SUM(cnt) OVER (ORDER BY SaleDate) AS NumSalesman,
    SUM(amt) OVER (ORDER BY SaleDate) AS TotalAmount
FROM cte
ORDER BY SaleDate;
The logic in the CTE is that we try to find, for each salesman, an earlier record for the same salesman. If we can't find such a record, then we assume the record in question is the first appearance. Then we aggregate by date to get the counts per day, and finally take a rolling sum of counts in the outer query.
The best way to do this uses window functions to determine the first time a sales person appears. Then, you just want cumulative sums:
select saledate,
       sum(sum(case when seqnum = 1 then 1 else 0 end)) over (order by saledate) as num_salespersons,
       sum(sum(sales)) over (order by saledate) as running_sales
from (select t.*,
             row_number() over (partition by salesperson order by saledate) as seqnum
      from t
     ) t
group by saledate
order by saledate;
Note that, in addition to being more concise, this should have much, much better performance than a solution that uses a self-join.

Oracle SQL Help Data Totals

I am on Oracle 12c and need help with a simple query.
Here is the sample data of what I currently have:
Table Name: customer
Table DDL
create table customer(
customer_id varchar2(50),
name varchar2(50),
activation_dt date,
space_occupied number(38)
);
Sample Table Data:
customer_id | name    | activation_dt | space_occupied
abc         | abc-001 | 2016-09-12    | 20
xyz         | xyz-001 | 2016-09-12    | 10
Sample Data Output
The query I am looking for will provide the following:
customer_id | name    | activation_dt | space_occupied
abc         | abc-001 | 2016-09-12    | 20
xyz         | xyz-001 | 2016-09-12    | 10
Total_Space | null    | null          | 30
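For reference, the sample rows as INSERT statements against the DDL above (a sketch):
-- the two sample rows from the question, using Oracle DATE literals
INSERT INTO customer (customer_id, name, activation_dt, space_occupied)
VALUES ('abc', 'abc-001', DATE '2016-09-12', 20);

INSERT INTO customer (customer_id, name, activation_dt, space_occupied)
VALUES ('xyz', 'xyz-001', DATE '2016-09-12', 10);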
Here is a slightly hacky approach to this, using the grouping function ROLLUP().
select coalesce(customer_id, 'Total Space') as customer_id
     , name
     , activation_dt
     , sum(space_occupied) as space_occupied
from customer
group by ROLLUP(customer_id, name, activation_dt)
having grouping(customer_id) = 1
    or (grouping(name) + grouping(customer_id) + grouping(activation_dt)) = 0;

CUSTOMER_ID  NAME     ACTIVATIO  SPACE_OCCUPIED
------------ -------- ---------- --------------
abc          abc-001  12-SEP-16              20
xyz          xyz-001  12-SEP-16              10
Total Space                                  30
ROLLUP() generates intermediate totals for each combination of columns; the verbose HAVING clause filters those out and retains only the detail rows and the grand total.
What you want is a bit unusual; if customer_id were an integer you would have to cast it to a string, etc. But if this is your requirement, it can be achieved this way.
SELECT customer_id,
       name,
       activation_dt,
       space_occupied
FROM
    (SELECT 1 AS seq,
            customer_id,
            name,
            activation_dt,
            space_occupied
     FROM customer
     UNION ALL
     SELECT 2 AS seq,
            'Total_Space' AS customer_id,
            NULL AS name,
            NULL AS activation_dt,
            sum(space_occupied) AS space_occupied
     FROM customer
    )
ORDER BY seq
Explanation:
Inner query:
First part of the UNION ALL: I added a hardcoded 1 AS seq alongside your result set from customer.
Second part of the UNION ALL: I am just calculating sum(space_occupied) and hardcoding the other columns, including 2 AS seq.
Outer query: selects the data columns and orders by seq, so Total_Space is returned last.
Output
+-------------+---------+---------------+----------------+
| CUSTOMER_ID | NAME | ACTIVATION_DT | SPACE_OCCUPIED |
+-------------+---------+---------------+----------------+
| abc | abc-001 | 12-SEP-16 | 20 |
| xyz | xyz-001 | 12-SEP-16 | 10 |
| Total_Space | null | null | 30 |
+-------------+---------+---------------+----------------+
Seems like a great place to use GROUP BY GROUPING SETS; this is what they were designed for.
SELECT coalesce(Customer_Id, 'Total_Space') as Customer_ID
     , Name
     , Activation_DT
     , sum(Space_Occupied) as Space_Occupied
FROM customer
GROUP BY GROUPING SETS ((Customer_ID, Name, Activation_DT, Space_Occupied)
                       ,())
The key thing here is that we are summing space_occupied. The two grouping sets tell the engine to keep each row in its original form, plus one record with space_occupied summed; since that record is grouped by the empty set (), only aggregated values are returned for it, along with constants (the coalesce-hardcoded value for the total!).
The power of this is that if you needed to group by other things as well, you could have multiple grouping sets. Imagine a material with a product division, group and line, and you want a report with sales totals by division, group and line. You could simply group by () to get the grand total, (product_division, product_group, product_line) to get a product-line total, (product_division, product_group) to get a product-group total, and (product_division) to get a product-division total. Pretty powerful stuff for partial cube generation.
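A sketch of that idea against a hypothetical sales table (the table and column names here are invented for illustration):
-- hypothetical table: one row per sale, with the product hierarchy denormalized
SELECT product_division,
       product_group,
       product_line,
       sum(sale_amount) AS total_sales
FROM sales
GROUP BY GROUPING SETS ((product_division, product_group, product_line), -- per-line totals
                        (product_division, product_group),               -- per-group totals
                        (product_division),                              -- per-division totals
                        ())                                              -- grand total
ORDER BY product_division, product_group, product_line;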

SQL Server: how to divide the result of sum of total for every customer id

I have 4 tables like this (you can ignore table B because this problem does not use that table).
I want to show the sum of 'total' for each 'sales_id' from table 'sales_detail'
What I want (the result) is like this:
sales_id | total
S01 | 3
S02 | 2
S03 | 4
S04 | 1
S05 | 2
S05 | 3
I have tried with this query:
select
sum(total)
from
sales_detail
where
sales_id = any (select sales_id
from sales
where customer_id = any (select customer_id
from customer)
)
but the query returns a value of 15 because that is the sum over all of those rows of data.
I have tried to use "distinct" before sum, and the result is [ 1, 2, 3 ] because those are the distinct values of those rows (not the sum for each sales_id).
It's all about the subquery.
You are just so far off track that a simple comment won't help. Your query only concerns one table, sales_detail. It has nothing to do with the other two.
And, it is just an aggregation query:
select sd.sales_id, sum(sd.total)
from sales_detail sd
group by sd.sales_id;
This is actually pretty close to what the question itself is asking.

postgresql - how to calculate the percentage of number of entries in one table w.r.t to no of entries in another table

I have two tables (A & B) in my database (C). Both tables contain two columns, as shown below:
column name  | Datatype
eventtime    | timestamp without time zone
serialnumber | numeric
Table A contains the total number of products (good and downgraded, though in this table they are not flagged as such) produced each day, and table B contains only the downgraded products.
I want to make a quality process control chart using the percentage of downgraded products w.r.t. the total number of products produced (joining on serialnumber, for example).
Could someone tell me how I can get the percentage value for each day (and also for each hour)?
Use date_trunc() to group rows by desired period, e.g.:
select
    a.r_date::date as date,
    downgraded,
    total,
    round(downgraded::numeric / total * 100, 2) as percentage
from (
    select date_trunc('day', eventtime) as r_date, count(*) as downgraded
    from b
    group by 1
) b
join (
    select date_trunc('day', eventtime) as r_date, count(*) as total
    from a
    group by 1
) a
using (r_date)
order by 1;
date | downgraded | total | percentage
------------+------------+-------+------------
2015-05-05 | 3 | 4 | 75.00
2015-05-06 | 1 | 4 | 25.00
(2 rows)
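For the hourly figure the question also asks about, the same query should only need the date_trunc unit changed (a sketch, untested):
-- same join, but bucketing by hour instead of day; hours with no downgraded
-- products simply won't appear (a left join from a would keep them at 0)
select
    r_date as hour_start,
    downgraded,
    total,
    round(downgraded::numeric / total * 100, 2) as percentage
from (
    select date_trunc('hour', eventtime) as r_date, count(*) as downgraded
    from b
    group by 1
) b
join (
    select date_trunc('hour', eventtime) as r_date, count(*) as total
    from a
    group by 1
) a
using (r_date)
order by 1;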