I have an MS Access table tracking quantities of products at month end, as below.
I need to generate the latest quantity for a specified ProductId as of a specified date, e.g.
the Quantity for ProductId 1 on 15-Feb-12 is 100, and the Quantity for ProductId 1 on 15-Mar-12 is 150.
ProductId | ReportingDate | Quantity|
1 | 31-Jan-12 | 100 |
2 | 31-Jan-12 | 200 |
1 | 28-Feb-12 | 150 |
2 | 28-Feb-12 | 250 |
1 | 31-Mar-12 | 180 |
2 | 31-Mar-12 | 280 |
My SQL statement below brings back all the previous values instead of only the latest one. Could anyone assist me in troubleshooting the query?
SELECT Sheet1.ProductId, Max(Sheet1.ReportingDate) AS MaxOfReportingDate, Sheet1.Quantity
FROM Sheet1
GROUP BY Sheet1.ProductId, Sheet1.Quantity, Sheet1.ReportingDate, Sheet1.ProductId
HAVING (((Sheet1.ReportingDate)<#3/15/2012#) AND ((Sheet1.ProductId)=1))
Here's #naveen's idea:
SELECT TOP 1 Sheet1.ProductId, Sheet1.ReportingDate AS MaxOfReportingDate, Sheet1.Quantity
FROM Sheet1
WHERE (Sheet1.ProductId = 1)
AND (Sheet1.ReportingDate < #2012/03/15#)
ORDER BY Sheet1.ReportingDate DESC
Note, though, that MS Access's TOP includes ties, so this won't work if you have more than one row per ReportingDate/ProductId combination. (But at the same time, that would mean the data isn't deterministic anyway.)
Edit - I meant that if you have a contradiction in your data like the one below, you'll get 2 rows back.
ProductId | ReportingDate | Quantity|
1 | 31-Jan-12 | 100
1 | 31-Jan-12 | 200
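If duplicates like that can occur, one way to still get a single row back is to collapse them with an aggregate before TOP 1 is applied. The following is only a sketch of that idea, using MAX(Quantity) as an arbitrary tie-break rule, which may or may not be the right rule for your data:
SELECT TOP 1 Sheet1.ProductId,
       Sheet1.ReportingDate AS MaxOfReportingDate,
       MAX(Sheet1.Quantity) AS MaxQuantity
FROM Sheet1
WHERE Sheet1.ProductId = 1
  AND Sheet1.ReportingDate < #2012/03/15#
GROUP BY Sheet1.ProductId, Sheet1.ReportingDate
ORDER BY Sheet1.ReportingDate DESC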
I am trying to sum all the columns that have the same ID number in a specified date range, but it always gives me duplicated values
select pr.product_sku,
pr.product_name,
pr.brand,
pr.category_name,
pr.subcategory_name,
a.stock_on_hand,
sum(pr.pageviews) as page_views,
sum(acquired_subscriptions) as acquired_subs,
sum(acquired_subscription_value) as asv_value
from dwh.product_reporting pr
join dm_product.product_data_livefeed a
on pr.product_sku = a.product_sku
where pr.fact_day between '2022-05-01' and '2022-05-30'
  and pr.pageviews > '0'
  and pr.acquired_subscription_value > '0'
  and store_id = 1
group by pr.product_sku,
pr.product_name,
pr.brand,
pr.category_name,
pr.subcategory_name,
a.stock_on_hand;
This is supposed to give me:
Sum of all KPI values for a distinct product SKU
Example table:
| Date       | product_sku | page_views | number_of_subs |
|------------|-------------|----------|--------------|
| 2022-01-01 | 1 | 110 | 50 |
| 2022-01-25 | 2 | 1000 | 40 |
| 2022-01-20 | 3 | 2000 | 10 |
| 2022-01-01 | 1 | 110 | 50 |
| 2022-01-25 | 2 | 1000 | 40 |
| 2022-01-20 | 3 | 2000 | 10 |
Expected Output:
| product_sku | page_views | number_of_subs |
|-------------|----------|--------------|
| 1 | 220 | 100 |
| 2 | 2000 | 80 |
| 3 | 4000 | 20 |
Sorry I had to edit to add the table examples
Since you're not listing the dupes (assuming they really are duplicate rows, and not just multiple rows with different values), there may be something else at play here. I would suggest applying TRIM(UPPER()) to every string value in your result set that is part of the GROUP BY clause, as you might be dealing with case differences or trailing blanks that are treated as unique values in the query.
Assuming all the columns are character based:
select trim(upper(pr.product_sku)),
trim(upper(pr.product_name)),
trim(upper(pr.brand)),
trim(upper(pr.category_name)),
trim(upper(pr.subcategory_name)),
sum(pr.pageviews) as page_views,
sum(acquired_subscriptions) as acquired_subs,
sum(acquired_subscription_value) as asv_value
from dwh.product_reporting pr
where pr.fact_day between '2022-05-01' and '2022-05-30'
  and pr.pageviews > '0'
  and pr.acquired_subscription_value > '0'
  and store_id = 1
group by trim(upper(pr.product_sku)),
trim(upper(pr.product_name)),
trim(upper(pr.brand)),
trim(upper(pr.category_name)),
trim(upper(pr.subcategory_name));
Thank you all for your help; I found out where the problem was. It was mainly in the GROUP BY: when I removed all the other column names and left only the product_sku column, it worked as required.
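For anyone hitting the same issue, here is a rough sketch of what that fix looks like, assuming the other descriptive columns (product_name, brand, etc.) are the same for every row of a given product_sku and can therefore be wrapped in MAX() instead of being grouped on; column names are taken from the query above:
select pr.product_sku,
       max(pr.product_name) as product_name,
       max(pr.brand) as brand,
       max(pr.category_name) as category_name,
       max(pr.subcategory_name) as subcategory_name,
       sum(pr.pageviews) as page_views,
       sum(acquired_subscriptions) as acquired_subs,
       sum(acquired_subscription_value) as asv_value
from dwh.product_reporting pr
where pr.fact_day between '2022-05-01' and '2022-05-30'
  and pr.pageviews > '0'
  and pr.acquired_subscription_value > '0'
  and store_id = 1
group by pr.product_sku;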
I have a problem.
I have a result query with order numbers, item numbers and different quantities for each item.
I want to get the distinct item numbers and sum up the quantities for each specific item number.
Here is an example table (Query output):
| OrderNo | ItemNo | Qty |
--------------------------------
| XY123 | 3000 | 4 |
| XY123 | 2000 | 2 |
| ZZ999 | 3000 | 6 |
| ZZ999 | 1000 | 3 |
| PP333 | 1000 | 5 |
The distinct values for all sold items with their item numbers would be:
1000 -> Count/Sum the Qty
2000 -> Count/Sum the Qty
3000 -> Count/Sum the Qty
Result:
| ItemNo | QtyTotal |
-------------------------
| 1000 | 8 |
| 2000 | 2 |
| 3000 | 10 |
My problem is that when I DISTINCT the ItemNo, I don't know how to SUM the corresponding quantities first. I need some advice, please.
You can use group by:
select ItemNo, sum(Qty) as QtyTotal
from QueryOutput q
group by ItemNo;
You can replace QueryOutput with a query that produces your example table.
Fiddle
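If you'd rather not save the first query separately, a derived table works too. A minimal sketch, where the inner select stands in for whatever query currently produces OrderNo, ItemNo and Qty (the table name Orders is only illustrative):
select q.ItemNo, sum(q.Qty) as QtyTotal
from (
    select OrderNo, ItemNo, Qty
    from Orders
) as q
group by q.ItemNo;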
I need to subtract a value, found in a different table, from values across different rows.
For example, the tables I have are:
ProductID | Warehouse | Locator | qtyOnHand
-------------------------------------------
100 | A | 123 | 12
100 | A | 124 | 12
100 | A | 124 | 8
101 | A | 126 | 6
101 | B | 127 | 12
ProductID | Sold
----------------
100 | 26
101 | 16
Result:
ProductID | Warehouse | Locator | qtyOnHand | available
-------------------------------------------------------
100 | A | 123 | 12 | 0
100 | A | 123 | 12 | 0
100 | A | 124 | 8 | 6
101 | A | 126 | 6 | 0
101 | B | 127 | 12 | 12
The value should only be subtracted from rows in warehouse A.
I'm using PostgreSQL. Any help is much appreciated!
If I understand correctly, you want to compare the total quantity sold to the cumulative on-hand quantities in the first table, taking the rows from largest to smallest qtyOnHand. For ProductID 100, for example, the running on-hand totals are 12, 24 and 32 against 26 sold, which leaves 0, 0 and 6 available, matching the expected output. Note: this is an interpretation and not 100% consistent with the data in the question.
Use JOIN to bring the data together, then cumulative sums and arithmetic:
select t1.*,
       (case when running_qoh < t2.sold then 0
             when running_qoh - qtyOnHand < t2.sold then (running_qoh - t2.sold)
             else qtyOnHand
        end) as available
from (select t1.*,
             sum(qtyOnHand) over (partition by productID order by qtyOnHand desc) as running_qoh
      from table1 t1
     ) t1
join table2 t2
     using (ProductID)
I have a table supplier_account which has five columns: supplier_account_id (pk), supplier_id (fk), voucher_no, debit and credit. I want to get the sum of debit grouped by supplier_id and then subtract the credit value of each row in which voucher_no is not null, so that for each subsequent row the remaining debit total gets reduced. I have tried using a 'with' clause.
with debitdetails as (
    select supplier_id, sum(debit) as amt
    from supplier_account
    group by supplier_id
)
select acs.supplier_id, s.supplier_name, acs.purchase_voucher_no, acs.purchase_voucher_date,
       dd.amt - acs.credit as amount
from supplier_account acs
left join supplier s on acs.supplier_id = s.supplier_id
left join debitdetails dd on acs.supplier_id = dd.supplier_id
where voucher_no is not null
But here the debit value will be the same for all rows. After the subtraction in the first row, I want to carry the result into the second row and subtract the next credit value from that.
I know it is possible by using temporary tables. The problem is I cannot use temporary tables because the procedure is used to generate reports using Jasper Reports.
What you need is an implementation of a running total. The easiest way to do that is with the help of a window function:
with debitdetails as(
select id,sum(debit) as amt
from suppliers group by id
)
select s.id, purchase_voucher_no, dd.amt, s.credit,
dd.amt - sum(s.credit) over (partition by s.id order by purchase_voucher_no asc)
from suppliers s
left join debitdetails dd on s.id=dd.id
order by s.id, purchase_voucher_no
SQL Fiddle
Results:
| id | purchase_voucher_no | amt | credit | ?column? |
|----|---------------------|-----|--------|----------|
| 1 | 1 | 43 | 5 | 38 |
| 1 | 2 | 43 | 18 | 20 |
| 1 | 3 | 43 | 8 | 12 |
| 2 | 4 | 60 | 5 | 55 |
| 2 | 5 | 60 | 15 | 40 |
| 2 | 6 | 60 | 30 | 10 |
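The fiddle above uses a simplified suppliers table. Mapped back onto the supplier_account / supplier schema from the question, the same running-total idea would look roughly like this (a sketch only; the ordering column and join keys are taken from the question's query and may need adjusting):
with debitdetails as (
    select supplier_id, sum(debit) as amt
    from supplier_account
    group by supplier_id
)
select acs.supplier_id,
       s.supplier_name,
       acs.voucher_no,
       dd.amt - sum(acs.credit) over (partition by acs.supplier_id
                                      order by acs.voucher_no) as amount
from supplier_account acs
left join supplier s on acs.supplier_id = s.supplier_id
left join debitdetails dd on acs.supplier_id = dd.supplier_id
where acs.voucher_no is not null
order by acs.supplier_id, acs.voucher_no;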
My question is similar to this question: SQL Group By Having Where Statements. The only difference is that I need to generate the latest quantities for all the ProductIds as of a specified date, e.g.
the Quantity for ProductId 1 on 15-Feb-12 is 100 and for ProductId 2 on 15-Feb-12 is 200; the Quantity for ProductId 1 on 15-Mar-12 is 150 and for ProductId 2 on 15-Mar-12 is 250. The data comes from an MS Access table tracking quantities of products at month end, as below.
ProductId | ReportingDate | Quantity|
1 | 31-Jan-12 | 100 |
2 | 31-Jan-12 | 200 |
1 | 28-Feb-12 | 150 |
2 | 28-Feb-12 | 250 |
1 | 31-Mar-12 | 180 |
2 | 31-Mar-12 | 280 |
My desired output on March 15 2012 should be a query as below:
ProductId | ReportingDate | Quantity|
1 | 28-Feb-12 | 150 |
2 | 28-Feb-12 | 250 |
My current SQL statement below returns the result for only one ProductId. Could anyone assist me in expanding the query to show all ProductIds?
SELECT TOP 1 Sheet1.ProductId, Sheet1.ReportingDate AS MaxOfReportingDate, Sheet1.Quantity
FROM Sheet1
WHERE (Sheet1.ProductId = 1)
AND (Sheet1.ReportingDate < #2012/03/15#)
ORDER BY Sheet1.ReportingDate DESC
You need to join with a sub-query that lists the latest date up to the desired date:
SELECT
Sheet1.ProductId, Sheet1.ReportingDate, Sheet1.Quantity
FROM Sheet1 INNER JOIN
(SELECT Sheet1.ProductId, MAX(Sheet1.ReportingDate) AS MaxOfReportingDate
FROM Sheet1
WHERE ReportingDate < #2012/03/15#
GROUP BY ProductId) AS a
ON Sheet1.ProductId = a.ProductId AND Sheet1.ReportingDate = a.MaxOfReportingDate
ORDER BY Sheet1.ReportingDate DESC
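As a side note (this is a sketch, not part of the original answer): if you need to run this for different cut-off dates, Access also lets you declare the date as a query parameter instead of hard-coding the literal. The parameter name [AsOfDate] below is purely illustrative:
PARAMETERS [AsOfDate] DateTime;
SELECT Sheet1.ProductId, Sheet1.ReportingDate, Sheet1.Quantity
FROM Sheet1 INNER JOIN
(SELECT Sheet1.ProductId, MAX(Sheet1.ReportingDate) AS MaxOfReportingDate
FROM Sheet1
WHERE ReportingDate < [AsOfDate]
GROUP BY ProductId) AS a
ON Sheet1.ProductId = a.ProductId AND Sheet1.ReportingDate = a.MaxOfReportingDate
ORDER BY Sheet1.ReportingDate DESC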