Handling negative values with SQL

I have a data set that lists the date and quantity of future stock of products. Occasionally our demand outstrips our future supply and we wind up with a negative future quantity. I need to factor that future negative quantity into previous supply so we don't compound the problem by overselling our supply.
In the following data set, I need to prepare for demand on 10-19 by applying the negative quantity up the chain until I'm left with a positive quantity:
"ID","SKU","DATE","SEASON","QUANTITY"
"1","001","2012-06-22","S12","1656"
"2","001","2012-07-13","F12","1986"
"3","001","2012-07-27","F12","-283"
"4","001","2012-08-17","F12","2718"
"5","001","2012-08-31","F12","-4019"
"6","001","2012-09-14","F12","7212"
"7","001","2012-09-21","F12","782"
"8","001","2012-09-28","F12","2073"
"9","001","2012-10-12","F12","1842"
"10","001","2012-10-19","F12","-12159"
I need to get it to this:
"ID","SKU","DATE","SEASON","QUANTITY"
"1","001","2012-06-22","S12","1656"
"2","001","2012-07-13","F12","152"
I have looked at using a WHILE loop as well as an OUTER APPLY but cannot seem to find a way to do this yet. Any help would be much appreciated. This would need to work for SQL Server 2008 R2.
Here's another example:
"1","002","2012-07-13","S12","1980"
"2","002","2012-08-10","F12","-306"
"3","002","2012-09-07","F12","826"
Would become:
"1","002","2012-07-13","S12","1674"
"3","002","2012-09-07","F12","826"

You don't seem to be getting a lot of answers, so here's a fallback in case the right 'how to do it in pure SQL' never shows up. Ignore this if a proper SQL answer turns up; it's just defensive coding, not elegant.
If what you want is a sum of all rows that share a season, with the redundant records removed, you can do it outside the database: pull the rows out, loop over them, accumulate the quantity per season, write the totals back, and delete the unnecessary entries. Here's one way to do it (pseudocode):
productsArray = SELECT * FROM products ORDER BY id
totals   = array (associative)   # season -> running total
firstRow = array (associative)   # season -> id of the first row seen for that season
foreach product in productsArray:
    if product[season] not in totals:
        totals[season]   = product[quantity]
        firstRow[season] = product[id]
    else:
        totals[season] = totals[season] + product[quantity]
        DELETE FROM products WHERE id = product[id]
foreach season in totals:
    UPDATE products SET quantity = totals[season] WHERE id = firstRow[season]
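For completeness, the same collapse can be done set-based in T-SQL without pulling rows into application code. A sketch, untested, assuming the table is named products as in the pseudocode above (SKU is added to the grouping to match the question's data):
-- Sketch only: total per SKU/season into a temp table, write the total onto the
-- first (lowest ID) row of each group, then delete the remaining rows of the group.
SELECT SKU, SEASON, SUM(QUANTITY) AS total_qty, MIN(ID) AS first_id
INTO   #season_totals
FROM   products
GROUP BY SKU, SEASON;

UPDATE p
SET    p.QUANTITY = st.total_qty
FROM   products p
JOIN   #season_totals st ON st.first_id = p.ID;

DELETE p
FROM   products p
JOIN   #season_totals st
  ON  st.SKU = p.SKU
  AND st.SEASON = p.SEASON
WHERE  p.ID <> st.first_id;

DROP TABLE #season_totals;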

Here is a CROSS APPLY - tested
SELECT b.ID, SKU, b.DATE, SEASON, QUANTITY
FROM (
    SELECT SKU, SEASON, SUM(QUANTITY) AS QUANTITY
    FROM T1
    GROUP BY SKU, SEASON
) a
CROSS APPLY (
    SELECT TOP 1 b.ID, b.Date
    FROM T1 b
    WHERE a.SKU = b.SKU AND a.SEASON = b.SEASON
    ORDER BY b.ID ASC
) b
ORDER BY ID ASC


SQL - join three tables based on (different) latest dates in two of them

Using Oracle SQL Developer, I have three tables with some common data that I need to join.
Appreciate any help on this!
Please refer to https://i.stack.imgur.com/f37Jh.png for the input and desired output (table formatting doesn't work on all tables).
These tables are made up in order to anonymize them, and in reality contain other data with millions of entries, but you could think of them as representing:
Product = Main product categories in a grocery store.
Subproduct = Subcategory products of the above. Each time the table is updated, a main product category may lose or gain some subproducts assigned to it. E.g. you can see that from May to June the Pulled pork entered while the Fishsoup was thrown out.
Issues = Status of the products; for example, an apple is bad if it has brown spots on it.
What I need to find is: for each P_NAME, find the latest updated set of subproducts (SP_ID and SP_NAME), and combine that with the latest updated issue status (STATUS_FLAG).
Please note that each main product category gets its set of subproducts updated on individual occasions, i.e. 1234 and 5678 might be "latest updated" on different dates.
I have tried multiple queries but failed each time. I am using combos of SELECT, LEFT OUTER JOIN, JOIN, MAX and GROUP BY.
Latest attempt, which gives me the combo of the first two tables, but missing the third:
SELECT
    PRODUCT.P_NAME,
    SUBPRODUCT.SP_PRODUCT_ID, SUBPRODUCT.SP_NAME, SUBPRODUCT.SP_ID, SUBPRODUCT.SP_VALUE_DATE
FROM SUBPRODUCT
LEFT OUTER JOIN PRODUCT ON PRODUCT.P_ID = SUBPRODUCT.SP_PRODUCT_ID
JOIN (SELECT SP_PRODUCT_ID, MAX(SP_VALUE_DATE) AS latestDate
      FROM SUBPRODUCT
      GROUP BY SP_PRODUCT_ID) sub
    ON sub.SP_PRODUCT_ID = SUBPRODUCT.SP_PRODUCT_ID
   AND sub.latestDate = SUBPRODUCT.SP_VALUE_DATE;
Trying to find a row with a max value is a common SQL pattern - you can do it with a join, like your example, but it's usually clearer to use a subquery or a window function.
Correlated subquery example
select
    PRODUCT.P_NAME,
    SUBPRODUCT.SP_PRODUCT_ID, SUBPRODUCT.SP_NAME, SUBPRODUCT.SP_ID, SUBPRODUCT.SP_VALUE_DATE,
    ISSUES.STATUS_FLAG, ISSUES.STATUS_LAST_UPDATED
from PRODUCT
join SUBPRODUCT
  on PRODUCT.P_ID = SUBPRODUCT.SP_PRODUCT_ID
 and SUBPRODUCT.SP_VALUE_DATE = (select max(S2.SP_VALUE_DATE) as latestDate
                                 from SUBPRODUCT S2
                                 where S2.SP_PRODUCT_ID = SUBPRODUCT.SP_PRODUCT_ID)
join ISSUES
  on ISSUES.ISSUE_ID = SUBPRODUCT.SP_ID
 and ISSUES.STATUS_LAST_UPDATED = (select max(I2.STATUS_LAST_UPDATED) as latestDate
                                   from ISSUES I2
                                   where I2.ISSUE_ID = ISSUES.ISSUE_ID)
Window function / inline view example
select
    PRODUCT.P_NAME,
    S.SP_PRODUCT_ID, S.SP_NAME, S.SP_ID, S.SP_VALUE_DATE,
    I.STATUS_FLAG, I.STATUS_LAST_UPDATED
from PRODUCT
join (select SUBPRODUCT.*,
             max(SP_VALUE_DATE) over (partition by SP_PRODUCT_ID) as latestDate
      from SUBPRODUCT) S
  on PRODUCT.P_ID = S.SP_PRODUCT_ID
 and S.SP_VALUE_DATE = S.latestDate
join (select ISSUES.*,
             max(STATUS_LAST_UPDATED) over (partition by ISSUE_ID) as latestDate
      from ISSUES) I
  on I.ISSUE_ID = S.SP_ID
 and I.STATUS_LAST_UPDATED = I.latestDate
This often performs a bit better, but window functions can be tricky to understand.

Summing Data on Two Rows with a Similar Identifier

I'm attempting to sum a 'Quantity' column in our database whenever an Item ID is either the ID itself or the same ID with a zero in front (whether the ID is 2447 or 02447, as an example in this case):
I started with the following to get the sums of the IDs:
SELECT
    "TJ_TransactionJournalDetail"."TLI_ScanCode" As LineItemID,
    "TJ_StockInventory"."INV_ScanCode" As InventoryID,
    "TJ_TransactionJournalDetail"."TLI_ReceiptAlias" As ReceiptAlias,
    "TJ_StockInventory"."INV_Name" As ItemName,
    "TJ_TransactionJournalDetail"."TLI_LIT_FK" As LineDiscount,
    SUM("TJ_TransactionJournalDetail"."TLI_Quantity") AS Quantity
FROM "TJ_StockInventory"
LEFT OUTER JOIN "TJ_TransactionJournalDetail"
    ON "TJ_StockInventory"."INV_PK" = "TJ_TransactionJournalDetail"."INV_PK"
WHERE ecrs.TJ_TransactionJournalDetail.TLI_StartTime > '2020-01-17 00:00:00.000'
  AND ecrs.TJ_TransactionJournalDetail.TLI_EndTime < '2020-01-19 23:59:59.999'
  AND INV_DPT_FK = 49
  AND "TJ_TransactionJournalDetail"."TLI_LIT_FK" = 1
GROUP BY LineItemID, InventoryID, ReceiptAlias, ItemName, LineDiscount
ORDER BY InventoryID;
These are the results the quantity of which I'm attempting to combine:
LineItemID,InventoryID,ReceiptAlias,ItemName,LineDiscount,Quantity
'2447','02447 ','DELI-BEAR CLAW EA','Bear Claw',1,1.0000
'02447','02447 ','DELI-BEAR CLAW EA','Bear Claw',1,30.0000
What I'm looking for:
'2447','02447 ','DELI-BEAR CLAW EA','Bear Claw',1,31.0000
-or-
'02447','02447 ','DELI-BEAR CLAW EA','Bear Claw',1,31.0000
as long as the quantity is 31.
I basically want to combine the quantity of two rows if the LineItemID is the same as "0" concatenated with the LineItemID on another line. Or another way of possibly accomplishing it would be to combine all items with the same InventoryID, but that is in the StockInventory table, not the TransactionJournal table which has the quantity that I'm summing.
I have tried a number of solutions. First I tried a CASE statement, but couldn't figure out how to apply it across rows:
....
SUM (
CASE WHEN ( "TJ_StockInventory"."INV_ScanCode" = STRING('0',"TJ_TransactionJournalDetail"."TLI_ScanCode",' ') )
THEN "TJ_TransactionJournalDetail"."TLI_Quantity"
ELSE 0 END
) AS Quantity
FROM "TJ_StockInventory" LEFT OUTER JOIN
....
I also tried partitioning by ItemID to combine the quantity when the InventoryIDs were the same:
....
SUM("TJ_TransactionJournalDetail"."TLI_Quantity") over (partition by "TJ_StockInventory"."INV_ScanCode") AS Quantity
....
But neither of these solutions worked. I chose to "else 0" the case statement just to narrow it down to that one item, but in all cases it kept the lines separate and did not combine the quantities.
I've looked at a number of tutorials but none seem to deal with this specific case, and I haven't found anything that was hinting at a solution for this. That being said, I have difficulty wrapping my mind around database programming at times and am open to the idea that I may be approaching this in the incorrect way.
A couple of pseudocode examples of what I'm looking for
if LineItemID of this row = CONCAT('0', LineItemID) of another row
then sum the quantities of those rows
-or-
if InventoryID of this row and InventoryID of another row are equal
then sum them even if the LineItemIDs are different
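Edit: here's a rough sketch of the first rule above as a self-join, in case it clarifies what I mean. It assumes CONCAT() exists in this dialect (the STRING('0', ...) call from my query would serve the same purpose), and it leaves out the date and department filters for brevity:
-- Hypothetical sketch: total per scan code first, then pair each code with its
-- zero-padded twin and add the two totals. Only codes that actually have a
-- zero-padded counterpart appear in the result.
SELECT
    short.scan_code                    AS LineItemID,
    short.total_qty + padded.total_qty AS Quantity
FROM (
    SELECT "TLI_ScanCode" AS scan_code, SUM("TLI_Quantity") AS total_qty
    FROM "TJ_TransactionJournalDetail"
    GROUP BY "TLI_ScanCode"
) AS short
JOIN (
    SELECT "TLI_ScanCode" AS scan_code, SUM("TLI_Quantity") AS total_qty
    FROM "TJ_TransactionJournalDetail"
    GROUP BY "TLI_ScanCode"
) AS padded
    ON padded.scan_code = CONCAT('0', short.scan_code);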
Any help, pointers or directions to examples or docs online that could help with this would be great!
Thank you!
You seem to want to remove "LineItemID" from the aggregation:
SELECT MAX(tjd."TLI_ScanCode") As LineItemID,
       si."INV_ScanCode" As InventoryID,
       tjd."TLI_ReceiptAlias" As ReceiptAlias,
       si."INV_Name" As ItemName,
       tjd."TLI_LIT_FK" As LineDiscount,
       SUM(tjd."TLI_Quantity") AS Quantity
FROM "TJ_StockInventory" si LEFT OUTER JOIN
     "TJ_TransactionJournalDetail" tjd
     ON si."INV_PK" = tjd."INV_PK"
WHERE tjd.TLI_StartTime > '2020-01-17' AND
      tjd.TLI_EndTime < '2020-01-20' AND
      INV_DPT_FK = 49 AND
      tjd."TLI_LIT_FK" = 1
GROUP BY InventoryID, ReceiptAlias, ItemName, LineDiscount
ORDER BY InventoryID;

How to use SUM in this situation?

I have the following tables below and their schema:
INV
id, product code, name, ucost, tcost, desc, type, qoh
1,123,CPASS 700,1.00,5.00,CPASS 700 Lorem, COM,5
2,456,Shelf 5,2.00,6.00,Shelf 5 KJ, BR,3
GRP
id,type,desc
1,COM,COMPASS
2,BR,SHELF
Currently I have a query like this:
SELECT INV.*,GRP.DESCR AS CATEGORY
FROM INV LEFT JOIN GRP ON INV.TYPE = GRP.TYPE
WHERE INV.QOH = 0
There are no problems with that query.
Right now, I want to know the SUM of the TCOST of every INV record where QOH is 0.
In this situation, does that mean all I have to do is write a separate query like the one below:
SELECT SUM(TCOST)
FROM INV
WHERE QOH = 0
Does it make any sense for me to try and combine those two queries into one?
First, understand that SUM is an aggregate function, so you can either run a query like
(SELECT SUM(TCOST) FROM INV WHERE QOH=0) as total
which will return the sum of TCOST in the INV table for the given condition.
Another approach is finding the sum grouped by some column (e.g. Type), in which case you could write a query like
SELECT Type, SUM(TCOST) FROM INV WHERE QOH=0 GROUP BY Type;
It's not clear on what criteria you want to sum, but I think the two approaches above should give you a fair idea.
Mmm, you could maybe use a subquery, though I'm not sure it's the best approach since I'm not sure I understand what you're attempting to do:
SELECT INV.*,
GRP.DESCR AS CATEGORY ,
(SELECT SUM(TCOST) FROM INV WHERE QOH=0) as your_sum
FROM INV LEFT JOIN GRP ON INV.TYPE = GRP.TYPE
WHERE INV.QOH = 0
If you want only one value for the sum(), then your query is fine. If you want a new column with the sum, then use window functions:
SELECT INV.*, GRP.DESCR AS CATEGORY,
SUM(INV.TCOST) OVER () as sum_at_zero
FROM INV LEFT JOIN
GRP
ON INV.TYPE = GRP.TYPE
WHERE INV.QOH = 0;
It does not make sense to combine the queries by adding a row to the first one, because the columns are very different. A SQL result set requires that all rows have the same columns.

SQL Server - Need to SUM values across multiple returned records

In the following query I am trying to get TotalQty to SUM across both the locations for item 6112040, but so far I have been unable to make this happen. I do need to keep both lines for 6112040 separate in order to capture the different location.
This query feeds into a Jasper ireport using something called Java.Groovy. Despite this, none of the PDFs printed yet have been either stylish or stained brown. Perhaps someone could address that issue as well, but this SUM issue takes priority
I know Gordon Linoff will get on in about an hour so maybe he can help.
DECLARE @receipt INT
SET @receipt = 20
SELECT
    ent.WarehouseSku AS WarehouseSku,
    ent.PalletId AS [ReceivedPallet],
    ISNULL(inv.LocationName, '') AS [ActualLoc],
    SUM(ISNULL(inv.Qty, 0)) AS [LocationQty],
    SUM(ISNULL(inv.Qty, 0)) AS [TotalQty],
    MAX(CAST(ent.ReceiptLineNumber AS INT)) AS [LineNumber],
    MAX(ent.WarehouseLotReference) AS [WarehouseLot],
    LEFT(SUM(ent.WeightExpected), 7) AS [GrossWeight],
    LEFT(SUM(inv.[Weight]), 7) AS [NetWeight]
FROM WarehouseReceiptDetail AS det
INNER JOIN WarehouseReceiptDetailEntry AS ent
    ON  det.ReceiptNumber = ent.ReceiptNumber
    AND det.FacilityName = ent.FacilityName
    AND det.WarehouseName = ent.WarehouseName
    AND det.ReceiptLineNumber = ent.ReceiptLineNumber
LEFT OUTER JOIN Inventory AS inv
    ON  inv.WarehouseName = det.WarehouseName
    AND inv.FacilityName = det.FacilityName
    AND inv.WarehouseSku = det.WarehouseSku
    AND inv.CustomerLotReference = ent.CustomerLotReference
    AND inv.LotReferenceOne = det.ReceiptNumber
    AND ISNULL(ent.CaseId, '') = ISNULL(inv.CaseId, '')
WHERE
    det.WarehouseName = $Warehouse
    AND det.FacilityName = $Facility
    AND det.ReceiptNumber = @receipt
GROUP BY
    ent.PalletId
    , ent.WarehouseSku
    , inv.LocationName
    , inv.Qty
    , inv.LotReferenceOne
ORDER BY ent.WarehouseSku
The lines I need partially coalesced are 4 and 5 in the above return.
Create a second dataset with a subquery and join to that subquery - you can extrapolate from the following to apply to your situation:
First the Subquery:
SELECT
    WarehouseSku,
    SUM(Qty) AS TotalQty
FROM
    Inventory
GROUP BY
    WarehouseSku
Now apply to your query - insert into the FROM clause:
...
LEFT JOIN (
    SELECT
        WarehouseSKU,
        SUM(Qty) AS TotalQty
    FROM
        Inventory
    GROUP BY
        WarehouseSKU
) AS TotalQty
    ON det.WarehouseSku = TotalQty.WarehouseSku
Without seeing the actual schema DDL it is hard to know the exact cardinality, but I think this will point you in the right direction.
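Another option, if you're on SQL Server 2005 or later, is to layer a window function over the grouped sums, so TotalQty spans every location for the SKU while LocationQty stays per-location. A minimal, self-contained illustration with made-up data (not the real schema):
-- Toy data only; the pattern is SUM(SUM(...)) OVER (PARTITION BY ...), i.e. a window
-- total computed over the already-grouped rows.
DECLARE @inv TABLE (WarehouseSku VARCHAR(20), LocationName VARCHAR(20), Qty INT);
INSERT INTO @inv VALUES
    ('6112040', 'LOC-A', 10),
    ('6112040', 'LOC-B',  5),
    ('7000001', 'LOC-A',  7);

SELECT
    WarehouseSku,
    LocationName,
    SUM(Qty)                                       AS LocationQty,   -- per location
    SUM(SUM(Qty)) OVER (PARTITION BY WarehouseSku) AS TotalQty       -- per SKU, all locations
FROM @inv
GROUP BY WarehouseSku, LocationName
ORDER BY WarehouseSku, LocationName;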

SQL Filtering duplicate rows due to bad ETL

The database is Postgres but any SQL logic should help.
I am retrieving the set of sales quotations that contain a given product within the bill of materials. I'm doing that in two steps: step 1, retrieve all DISTINCT quote numbers which contain a given product (by product number).
The second step, retrieve the full quote, with all products listed for each unique quote number.
So far, so good. Now the tough bit. Some rows are duplicates, some are not. Those that are duplicates (quote number & quote version & line number) might or might not have maintenance on them. I want to pick the row that has maintenance greater than 0. The duplicate rows I want to exclude are those that have a 0 maintenance. The problem is that some rows, which have no duplicates, have 0 maintenance, so I can't just filter on maintenance.
To make this exciting, the database holds quotes going back 20+ years. And the data science guys have just admitted that maybe the ETL process has some bugs...
--- step 0
--- cleanup the workspace
SET CLIENT_ENCODING TO 'UTF8';
DROP TABLE IF EXISTS product_quotes;
--- step 1
--- get list of Product Quotes
CREATE TEMPORARY TABLE product_quotes AS (
    SELECT DISTINCT master_quote_number
    FROM w_quote_line_d
    WHERE item_number IN ( << model numbers >> )
);
--- step 2
--- Now join on that list
SELECT
    d.quote_line_number,
    d.item_number,
    d.item_description,
    d.item_quantity,
    d.unit_of_measure,
    f.ref_list_price_amount,
    f.quote_amount_entered,
    f.negtd_discount,
    --- need to calculate discount rate based on list price and negtd discount (%)
    CASE
        WHEN ref_list_price_amount > 0
        THEN 100 - (ref_list_price_amount + negtd_discount) / ref_list_price_amount * 100
        ELSE 0
    END AS discount_percent,
    f.warranty_months,
    f.master_quote_number,
    f.quote_version_number,
    f.maintenance_months,
    f.territory_wid,
    f.district_wid,
    f.sales_rep_wid,
    f.sales_organization_wid,
    f.install_at_customer_wid,
    f.ship_to_customer_wid,
    f.bill_to_customer_wid,
    f.sold_to_customer_wid,
    d.net_value,
    d.deal_score,
    f.transaction_date,
    f.reporting_date
FROM w_quote_line_d d
INNER JOIN product_quotes pq ON (pq.master_quote_number = d.master_quote_number)
INNER JOIN w_quote_f f ON
    (f.quote_line_number = d.quote_line_number
     AND f.master_quote_number = d.master_quote_number
     AND f.quote_version_number = d.quote_version_number)
WHERE d.net_value >= 0 AND item_quantity > 0
ORDER BY f.master_quote_number, f.quote_version_number, d.quote_line_number
The logic to filter the duplicate rows is like this:
For each master_quote_number / version_number pair, check to see if there are duplicate line numbers. If so, pick the one with maintenance > 0.
Even in a CASE statement, I'm not sure how to write that.
Thoughts? The database is Postgres but any SQL logic should help.
I think you will want to use Window Functions. They are, in a word, awesome.
Here is a query that would "dedupe" based on your criteria:
select *
from (
    select
        * -- simplifying here to show the important parts
        ,row_number() over (
            partition by d.master_quote_number, f.quote_version_number, d.quote_line_number
            order by f.maintenance_months desc) as seqnum
    from w_quote_line_d d
    inner join product_quotes pq
        on (pq.master_quote_number = d.master_quote_number)
    inner join w_quote_f f
        on (f.quote_line_number = d.quote_line_number
            and f.master_quote_number = d.master_quote_number
            and f.quote_version_number = d.quote_version_number)
) x
where seqnum = 1
The use of row_number() and the chosen partition by and order by criteria guarantee that only ONE row for each combination of master_quote_number / quote_version_number / quote_line_number will get the value 1, and it will be the one with the highest value in maintenance_months (if your colleagues are right, there would only be one with a value > 0 anyway).
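A self-contained toy example of the pattern (made-up rows), just to show which rows seqnum = 1 keeps:
-- Two rows share quote 100 / version 1 / line 1; only the one with maintenance > 0 survives.
with w_quote_line_toy (master_quote_number, quote_version_number, quote_line_number, maintenance_months) as (
    values (100, 1, 1, 0),
           (100, 1, 1, 12),
           (100, 1, 2, 0)
)
select master_quote_number, quote_version_number, quote_line_number, maintenance_months
from (
    select *,
           row_number() over (
               partition by master_quote_number, quote_version_number, quote_line_number
               order by maintenance_months desc) as seqnum
    from w_quote_line_toy
) x
where seqnum = 1
order by quote_line_number;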
Can you do something like...
select *
from w_quote_line_d d
inner join (
    select
        ...
        ,max(maintenance) as maintenance
    from w_quote_line_d
    group by
        ...
) d1
    on d1.id = d.id
    and d1.maintenance = d.maintenance;
Am I understanding your problem correctly?
Edit: Forgot the group by!
I'm not sure, but maybe you could Group By all other columns and use MAX(Maintenance) to get only the greatest.
What do you think?
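Something like this, using the question's column names (a sketch; it assumes maintenance is the only column that differs between the duplicate rows, otherwise the remaining columns would need their own MAX()/MIN() or a join back):
-- Hypothetical sketch of the GROUP BY / MAX(maintenance) idea.
select master_quote_number, quote_version_number, quote_line_number,
       max(maintenance_months) as maintenance_months
from w_quote_line_d
group by master_quote_number, quote_version_number, quote_line_number;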