In the example below, the item code is split into 4 different lots (different versions of the same product).
The item has 780 units allocated to orders across all lots, but the first lot only has 207 available. I need another column that works out how many units are available, consuming the oldest lot first: in the example, the first lot would be used up, as would the 2nd and 3rd, and there would be 382 units available from the final lot. I'm not sure how to write this in SQL. There are many more products in the dataset, some with more lots and some with fewer.
Any help would be appreciated.
Select
s.[Item Code]
,s.Lot
,s.[Allocated to Orders]
,s.[Available QOH]
from #StockValuation1 s
where s.[Item Code] = 'Test12080'
Desired outcome -
My solution utilises a window function to capture the running total of Available up to and including the current Lot, which is then used to calculate the allocation per Lot and the remainder:
declare @t table(ItemCode int, Lot int, Allocated int, Available int);
insert into @t values
(1,1,780,207)
,(1,2,780,400)
,(1,3,780,55)
,(1,4,780,500)
,(1,5,780,100)
,(2,1,430,270)
,(2,2,430,140)
,(2,3,430,150)
,(2,4,430,50)
,(2,5,430,100)
;
with rt as
(
select ItemCode
,Lot
,Allocated
,Available
,case when rt >= Allocated
then Allocated - (rt - Available) -- running total covers the allocation: only the remainder (possibly negative) lands in this lot
else Available                    -- allocation not yet covered: the whole lot is allocated
end as LotAllocation
from (select *
,sum(Available) over (partition by ItemCode order by Lot) as rt
from @t
) as t
)
select ItemCode
,Lot
,Allocated
,Available
,case when LotAllocation < 0
then 0
else LotAllocation
end as LotAllocation
,case when LotAllocation < 0
then Available
else Available - LotAllocation
end as AvailableLessAllocation
from rt
order by ItemCode
,Lot;
Output:
+----------+-----+-----------+-----------+---------------+-------------------------+
| ItemCode | Lot | Allocated | Available | LotAllocation | AvailableLessAllocation |
+----------+-----+-----------+-----------+---------------+-------------------------+
| 1 | 1 | 780 | 207 | 207 | 0 |
| 1 | 2 | 780 | 400 | 400 | 0 |
| 1 | 3 | 780 | 55 | 55 | 0 |
| 1 | 4 | 780 | 500 | 118 | 382 |
| 1 | 5 | 780 | 100 | 0 | 100 |
| 2 | 1 | 430 | 270 | 270 | 0 |
| 2 | 2 | 430 | 140 | 140 | 0 |
| 2 | 3 | 430 | 150 | 20 | 130 |
| 2 | 4 | 430 | 50 | 0 | 50 |
| 2 | 5 | 430 | 100 | 0 | 100 |
+----------+-----+-----------+-----------+---------------+-------------------------+
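Applied to the original #StockValuation1 table, the same pattern might look like the sketch below (assuming the bracketed column names from the question and that Lot orders oldest-first within each item):
with rt as
(
select s.[Item Code]
,s.Lot
,s.[Allocated to Orders]
,s.[Available QOH]
,case when rt >= s.[Allocated to Orders]
then s.[Allocated to Orders] - (rt - s.[Available QOH]) -- running total covers the allocation: only the remainder lands here
else s.[Available QOH]                                  -- allocation not yet covered: the whole lot is allocated
end as LotAllocation
from (select s.*
,sum(s.[Available QOH]) over (partition by s.[Item Code] order by s.Lot) as rt
from #StockValuation1 s
) as s
)
select [Item Code]
,Lot
,[Allocated to Orders]
,[Available QOH]
,case when LotAllocation < 0 then 0 else LotAllocation end as LotAllocation
,case when LotAllocation < 0 then [Available QOH] else [Available QOH] - LotAllocation end as AvailableLessAllocation
from rt
where [Item Code] = 'Test12080'
order by [Item Code], Lot;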
You can use cumulative sums. First get the amount allocated to the orders:
select s.*,
(case when s.[Allocated to Orders] >= running_qoh
then s.[Available QOH]
when s.[Allocated to Orders] <= running_qoh - s.[Available QOH]
then 0
else s.[Allocated to Orders] - (running_qoh - s.[Available QOH])
end) as used_in_orders
from (select s.*,
sum(s.[Available QOH]) over (partition by s.[Item Code] order by s.Lot) as running_qoh
from #StockValuation1 s
) s
where s.[Item Code] = 'Test12080';
Then use a subquery or CTE to get the difference:
select s.*,
(s.[Available QOH] - used_in_orders) as available_for_orders
from (select s.*,
(case when s.[Allocated to Orders] >= running_qoh
then s.[Available QOH]
when s.[Allocated to Orders] <= running_qoh - s.[Available QOH]
then 0
else s.[Allocated to Orders] - (running_qoh - s.[Available QOH])
end) as used_in_orders
from (select s.*,
sum(s.[Available QOH]) over (partition by s.[Item Code] order by s.Lot) as running_qoh
from #StockValuation1 s
) s
where s.[Item Code] = 'Test12080'
) s;
Note: I strongly recommend that you stop using spaces in your column names so you don't have to escape them. The escape characters just make queries harder to write, read, and maintain.
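If renaming the physical columns isn't an option, a minimal sketch is to alias them once in a derived table (the underscore names below are just my own choice) so the rest of the query doesn't need brackets:
select s.item_code, s.lot, s.allocated_to_orders, s.available_qoh
from (select [Item Code] as item_code
,Lot as lot
,[Allocated to Orders] as allocated_to_orders
,[Available QOH] as available_qoh
from #StockValuation1
) s
where s.item_code = 'Test12080';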
Related
I have data as follows
+----+------+--------+
| ID | Code | Weight |
+----+------+--------+
| 1 | M | 200 |
| 1 | 2A | 50 |
| 1 | 2B | 50 |
| 2 | | 350 |
| 2 | M | 350 |
| 2 | 3A | 120 |
| 2 | 3B | 120 |
| 3 | 5A | 100 |
| 4 | | 200 |
| 4 | | 100 |
+----+------+--------+
For ID 1 the max weight is 200; I want to subtract the sum of all other weights for ID 1 from that max value.
There might be a case where there are 2 rows containing the max value for the same ID. For example, ID 2 has 2 rows containing the max value, i.e. 350. In that scenario I still want to sum all values except the max and subtract that from the max, but I would mark the weight as 0 for one of the 2 rows containing the max value: the row where Code is NULL/blank.
Where there is only 1 row for an ID, the row is kept as is.
Another scenario is where there is only one row containing the max weight but its Code is NULL/blank; in that case we simply do what we did for ID 1: sum all values except the max and subtract that from the row containing the max value.
Desired Output
+----+------+--------+---------------+
| ID | Code | Weight | Actual Weight |
+----+------+--------+---------------+
| 1 | M | 200 | 100 |
| 1 | 2A | 50 | 50 |
| 1 | 2B | 50 | 50 |
| 2 | | 350 | 0 |
| 2 | M | 350 | 110 |
| 2 | 3A | 120 | 120 |
| 2 | 3B | 120 | 120 |
| 3 | 5A | 100 | 100 |
| 4 | | 200 | 100 |
| 4 | | 100 | 100 |
+----+------+--------+---------------+
I want to create the column Actual Weight as shown above. I can't find a way to apply PARTITION BY while excluding the max value in order to create it.
Use dense_rank() to identify the rows with the max weight: dr = 1 marks those rows.
Use row_number() to differentiate the max-weight row where Code is blank from the others:
with cte as
(
select *,
dr = dense_rank() over (partition by ID order by [Weight] desc),
rn = row_number() over (partition by ID order by [Weight] desc, Code desc)
from tbl
)
select *,
ActWeight = case when dr = 1 and rn <> 1
then 0
when dr = 1 and rn = 1
then [Weight]
- sum(case when dr <> 1 then [Weight] else 0 end) over (partition by ID)
else [Weight]
end
from cte
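A minimal setup sketch to try this locally; the table name tbl matches the query above, and the rows are the question's sample data (blank Code stored here as an empty string):
create table tbl (ID int, Code varchar(10), [Weight] int);

insert into tbl (ID, Code, [Weight]) values
(1, 'M', 200),
(1, '2A', 50),
(1, '2B', 50),
(2, '', 350),
(2, 'M', 350),
(2, '3A', 120),
(2, '3B', 120),
(3, '5A', 100),
(4, '', 200),
(4, '', 100);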
Hmmm . . . I think you just want window functions and conditional logic:
select t.*,
       (case when 1 = row_number() over (partition by id order by weight desc, (case when code <> '' then 1 else 2 end))
             then weight - sum(case when weight <> max_weight then weight else 0 end) over (partition by id)
             when weight = max_weight then 0
             else weight
        end) as actual_weight
from (select t.*,
             max(weight) over (partition by id) as max_weight
      from t
     ) t
I need to subtract a value, found in a different table, from values across different rows.
For example, the tables I have are:
ProductID | Warehouse | Locator | qtyOnHand
-------------------------------------------
100 | A | 123 | 12
100 | A | 124 | 12
100 | A | 124 | 8
101 | A | 126 | 6
101 | B | 127 | 12
ProductID | Sold
----------------
100 | 26
101 | 16
Result:
ProductID | Warehouse | Locator | qtyOnHand | available
-------------------------------------------------------
100 | A | 123 | 12 | 0
100 | A | 123 | 12 | 0
100 | A | 124 | 8 | 6
101 | A | 126 | 6 | 0
101 | B | 127 | 12 | 12
The value should only be subtracted from those in warehouse A.
I'm using PostgreSQL. Any help is much appreciated!
If I understand correctly, you want to compare the overall stock to the cumulative amounts in the first table. The rows in the first table appear to be ordered from largest to smallest. Note: This is an interpretation and not 100% consistent with the data in the question.
Use JOIN to bring the data together and then cumulative sums and arithmetic:
select t1.*,
(case when running_qoh < t2.sold then 0
when running_qoh - qtyOnHand < t2.sold then (running_qoh - t2.sold)
else qtyOnHand
end) as available
from (select t1.*,
sum(qtyOnHand) over (partition by productID order by qtyOnHand desc) as running_qoh
from table1 t1
) t1 join
table2 t2
using (ProductID)
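To try it, here is a sketch of the sample data from the question; table1 and table2 are the names the query above assumes:
create temp table table1 (ProductID int, Warehouse text, Locator int, qtyOnHand int);
insert into table1 values
(100, 'A', 123, 12),
(100, 'A', 124, 12),
(100, 'A', 124, 8),
(101, 'A', 126, 6),
(101, 'B', 127, 12);

create temp table table2 (ProductID int, Sold int);
insert into table2 values
(100, 26),
(101, 16);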
I have an SQL script below.
SELECT
InvoiceNo
,InvoiceType
,Amount
,OrderAmount
,ShippingAmount
,TruckTaxAmount
,PreShippingAmount
FROM truckdb AS t1
INNER JOIN truckdetails AS t2 ON t1.truckdetail = t2.truckid
WHERE [shipping date] >= '01-01-2011'
And sample data
+--------+-------------+---------+-------------+----------------+------------+----------+
| InvNo | InvoiceType | Amount | OrderAmount | ShippingAmount | TruckTxAmt | PreShAmt |
+--------+-------------+---------+-------------+----------------+------------+----------+
| 001 | ckt | 1200 | 544 | 666 | 23 | 11 |
| 002 | tkp | 1300 | 544 | 133 | 11 | 11 |
| 009 | ckt | 1222 | 221 | 122 | 221 | 566 |
+--------+-------------+---------+-------------+----------------+------------+----------+
I have several invoice types. For one particular invoice type, CKT, I want to show Amount, OrderAmount, ShippingAmount, and TruckTaxAmount as negative values. I tried to multiply using a CASE WHEN statement after the WHERE clause, but something is wrong.
You need to use the CASE WHEN in the SELECT clause, not after the WHERE clause:
SELECT
...
CASE WHEN InvoiceType='CKT' THEN Amount * -1.00 ELSE Amount END AS Amount,
CASE WHEN InvoiceType='CKT' THEN OrderAmount * -1.00 ELSE OrderAmount END AS OrderAmount,
(etc)
...
FROM ...
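For completeness, a sketch of the full statement with the CASE expressions applied to all four amounts, reusing the FROM/WHERE from the original script (assuming the same tables, aliases, and filter as the question):
SELECT
InvoiceNo
,InvoiceType
,CASE WHEN InvoiceType = 'CKT' THEN Amount * -1.00 ELSE Amount END AS Amount
,CASE WHEN InvoiceType = 'CKT' THEN OrderAmount * -1.00 ELSE OrderAmount END AS OrderAmount
,CASE WHEN InvoiceType = 'CKT' THEN ShippingAmount * -1.00 ELSE ShippingAmount END AS ShippingAmount
,CASE WHEN InvoiceType = 'CKT' THEN TruckTaxAmount * -1.00 ELSE TruckTaxAmount END AS TruckTaxAmount
,PreShippingAmount
FROM truckdb AS t1
INNER JOIN truckdetails AS t2 ON t1.truckdetail = t2.truckid
WHERE [shipping date] >= '01-01-2011'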
Thanks, @Tab Alleman
I'm looking for help converting this to something SQL Server 2008-friendly, as I just can't work it out. I've tried CROSS APPLYs and inner joins (not saying I did them right) to no avail... Any suggestions?
What this essentially does is take a table of stock and a table of orders
and combine the two to show me what to pick once the stock is taken away (see my previous question for more details).
WITH ADVPICK
AS (SELECT 'A' AS PlaceA,
placeb,
CASE
WHEN picktime = '00:00' THEN '07:00'
ELSE ISNULL(picktime, '12:00')
END AS picktime,
Cast(product AS INT) AS product,
prd_description,
-qty AS Qty
FROM t_pick_orders
UNION ALL
SELECT 'A' AS PlaceA,
placeb,
'0',
Cast(code AS INT) AS product,
NULL,
stock
FROM t_pick_stock),
STOCK_POST_ORDER
AS (SELECT *,
Sum(qty)
OVER (
PARTITION BY placeb, product
ORDER BY picktime ROWS UNBOUNDED PRECEDING ) AS new_qty
FROM ADVPICK)
SELECT *,
CASE
WHEN new_qty > qty THEN new_qty
ELSE qty
END AS order_shortfall
FROM STOCK_POST_ORDER
WHERE new_qty < 0
ORDER BY placeb,
picktime,
product
Now, SUM() OVER (PARTITION BY ... ORDER BY ... ROWS ...) is SQL Server 2012+, but I have two servers that run on 2008, so I need it converted...
Expected Results:
+--------+--------+----------+---------+-----------+-------+---------+-----------------+
| PlaceA | PlaceB | Picktime | product | Prd_Descr | qty | new_qty | order_shortfall |
+--------+--------+----------+---------+-----------+-------+---------+-----------------+
| BW | AMES | 16:00 | 1356 | Product A | -1330 | -17 | -17 |
| BW | AMES | 16:00 | 17 | Product B | -48 | -42 | -42 |
| BW | AMES | 17:00 | 1356 | Product A | -840 | -857 | -840 |
| BW | AMES | 18:00 | 1356 | Product A | -770 | -1627 | -770 |
| BW | AMES | 18:00 | 17 | Product B | -528 | -570 | -528 |
| BW | AMES | 19:00 | 1356 | Product A | -700 | -2327 | -700 |
| BW | AMES | 20:00 | 1356 | Product A | -910 | -3237 | -910 |
| BW | AMES | 20:00 | 8009 | Product C | -192 | -52 | -52 |
| BW | AMES | 20:00 | 897 | Product D | -90 | -10 | -10 |
+--------+--------+----------+---------+-----------+-------+---------+-----------------+
One straightforward way to do it is to use a correlated sub-query in CROSS APPLY.
If your table is more or less large, your next question would be how to make it fast. An index on (PlaceB, Product, PickTime) INCLUDE (Qty) should help. But if your table is really large, a cursor would be better.
WITH
ADVPICK
AS
(
SELECT 'A' as PlaceA,PlaceB, case when PickTime = '00:00' then '07:00' else isnull(picktime,'12:00') end as picktime, cast(Product as int) as product, Prd_Description, -Qty AS Qty FROM t_pick_orders
UNION ALL
SELECT 'A' as PlaceA,PlaceB, '0', cast(Code as int) as product, NULL, Stock FROM t_pick_stock
)
,stock_post_order
AS
(
SELECT
*
FROM
ADVPICK AS Main
CROSS APPLY
(
SELECT SUM(Sub.Qty) AS new_qty
FROM ADVPICK AS Sub
WHERE
Sub.PlaceB = Main.PlaceB
AND Sub.Product = Main.Product
AND Sub.PickTime <= Main.PickTime
) AS A
)
SELECT
*,
CASE WHEN new_qty > qty THEN new_qty ELSE qty END AS order_shortfall
FROM
stock_post_order
WHERE
new_qty < 0
ORDER BY PlaceB, picktime, product;
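A sketch of the suggested index on the base table behind ADVPICK (assuming t_pick_orders has exactly the columns used above; t_pick_stock would want a comparable index on its own columns):
CREATE NONCLUSTERED INDEX IX_t_pick_orders_placeb_product_picktime
ON t_pick_orders (PlaceB, Product, PickTime)
INCLUDE (Qty);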
Oh, and if (PlaceB, Product, PickTime) is not unique, you'll get somewhat different results from the original query with SUM() OVER. If you need exactly the same results, you need some extra column (like an ID) to resolve the ties.
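For illustration, the CROSS APPLY sub-query with that tie-break might look like the sketch below; the ID column is hypothetical (neither source table shows one), so substitute whatever unique key you have, carried through the ADVPICK union:
CROSS APPLY
(
SELECT SUM(Sub.Qty) AS new_qty
FROM ADVPICK AS Sub
WHERE
Sub.PlaceB = Main.PlaceB
AND Sub.Product = Main.Product
AND (Sub.PickTime < Main.PickTime
OR (Sub.PickTime = Main.PickTime AND Sub.ID <= Main.ID)) -- mimics ROWS UNBOUNDED PRECEDING with a deterministic tie order
) AS A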
If I have a table that keeps a running average of kW usage at a given temperature, and I want to get the kW usage for a temperature that has not been recorded before, how could I get either
(A) two data points above or two points below the temperature, to extrapolate, or
(B) the closest data point above and below the temperature, to interpolate?
The table temperatures looks like this
Column | Type | Modifiers | Storage | Stats target | Description
-------------------------+------------------+-----------+---------+--------------+---------------
temperature_io_id | integer | not null | plain | |
temperature_station_id | integer | not null | plain | |
temperature_value | integer | not null | plain | | in Fahrenheit
temperature_current_kw | double precision | not null | plain | |
temperature_value_added | integer | default 1 | plain | |
temperature_kw_year_1 | double precision | default 0 | plain | |
"temperatures_pkey" PRIMARY KEY, btree (temperature_io_id, temperature_station_id, temperature_value)
(A) Proposed Solution
This would be a bit easier, I think. The query would order the rows by temperature value > or < the temperature I'm going for, then limit the results to 2. This would give me the two closest values above or below the temperature. Of course the order would have to be descending or ascending, to make sure I get the right side of the values.
SELECT * FROM temperatures
WHERE
temperature_value > ACTUALTEMP and temperature_io_id = ACTUAL_IO_id
ORDER BY
temperature_value
LIMIT 2;
For (B), I think it would be similar to the above, but limit it to 1 and run 2 queries, one for > and the other for <. I feel like this could be done better, though?
Edit - Some sample data
temperature_io_id | temperature_station_id | temperature_value | temperature_current_kw | temperature_value_added | temperature_kw_year_1
-------------------+------------------------+-------------------+------------------------+-------------------------+-----------------------
18751 | 151 | 35 | 26.1 | 2 | 0
18752 | 151 | 35 | 30.5 | 2 | 0
18753 | 151 | 35 | 15.5 | 2 | 0
18754 | 151 | 35 | 12.8 | 2 | 0
18643 | 151 | 35 | 4.25 | 2 | 0
18644 | 151 | 35 | 22.15 | 2 | 0
18645 | 151 | 35 | 7.45 | 2 | 0
18646 | 151 | 35 | 7.5 | 2 | 0
18751 | 151 | 34 | 25.34 | 5 | 0
18752 | 151 | 34 | 30.54 | 5 | 0
18753 | 151 | 34 | 15.48 | 5 | 0
18754 | 151 | 34 | 13.08 | 5 | 0
18643 | 151 | 34 | 4.3 | 5 | 0
18644 | 151 | 34 | 22.44 | 5 | 0
18645 | 151 | 34 | 7.34 | 5 | 0
18646 | 151 | 34 | 7.54 | 5 | 0
You can get the nearest rows using:
select t.*
from temperatures t
order by abs(temperature_value - ACTUAL_TEMPERATURE) asc
limit 2
Or a better idea in this case is to use union:
(select t.*
from temperatures t
where temperature_value <= ACTUAL_TEMPERATURE
order by temperature_value desc
limit 1
) union
(select t.*
from temperatures t
where temperature_value >= ACTUAL_TEMPERATURE
order by temperature_value asc
limit 1
)
This version is better because it returns only one row if the temperature is already in the table: both branches can then return the same row, and UNION's duplicate removal collapses them into one. This is a case where UNION (rather than UNION ALL) is useful.
Next use conditional aggregation to get the information needed. This uses a short-cut, assuming that the kw increases with temperature:
select min(temperature_value) as mintv, max(temperature_value) as maxtv,
min(temperature_current_kw) as minkw, max(temperature_current_kw) as maxkw
from ((select t.*
from temperatures t
where temperature_value <= ACTUAL_TEMPERATURE
order by temperature_value desc
limit 1
) union
(select t.*
from temperatures t
where temperature_value >= ACTUAL_TEMPERATURE
order by temperature_value asc
limit 1
)
) t;
Finally, do some arithmetic to get the weighted average:
select (case when maxtv = mintv then minkw
else minkw + (ACTUAL_TEMPERATURE - mintv) * ((maxkw - minkw) / (maxtv - mintv))
end)
from (select min(temperature_value) as mintv, max(temperature_value) as maxtv,
min(temperature_current_kw) as minkw, max(temperature_current_kw) as maxkw
from ((select t.*
from temperatures t
where temperature_value <= ACTUAL_TEMPERATURE
order by temperature_value desc
limit 1
) union
(select t.*
from temperatures t
where temperature_value >= ACTUAL_TEMPERATURE
order by temperature_value asc
limit 1
)
) t
) t;
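As a quick worked check using the sample rows for temperature_io_id 18643 (34 °F → 4.3 kW, 35 °F → 4.25 kW): with ACTUAL_TEMPERATURE = 34.5 the expression evaluates to 4.3 + (34.5 - 34) * (4.25 - 4.3) / (35 - 34) = 4.275. As in the question's own query, you would normally also filter both branches by temperature_io_id (and possibly temperature_station_id) so the two endpoint rows come from the same meter.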