Background
I need to implement inventory valuation / costing using the FIFO (first-in, first-out) method.
I'm running Postgres 11 on CentOS 7.
I've looked at, and tried, a fair number of hypotheses from SO and the wider internet (as well as searching my own print library, which includes SQL Queries for Mere Mortals, PostgreSQL Up & Running, The SQL Cookbook, Practical Issues In Database Management, and other quality reference works), and to date I can't find a solution that works for closing inventory valuation.
(I've also tried reasoning it out on my own, but have failed to come up with a plausible approach.)
NOTE: In my case I have permission to change the table structure etc. of the setup, so I can add / remove / change anything as needed (such as adding a direction column to the movements table, as some approaches I've tried have suggested, changing queries, and so on).
Current setup
I have a table mockup_inv_movements:
CREATE TABLE the_schema.mockup_inv_movements (
id INTEGER NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
created_at TIMESTAMP WITH TIME ZONE DEFAULT now(),
sku TEXT,
adjustment_quantity NUMERIC,
unit_cost NUMERIC(19,2),
po_num INTEGER
);
and this view mockup_inv_movements_with_fifo_cost adds the FIFO cost for sale / 'out' rows, calculated from a query shown further below (note that the view it references, fifo_hypothesis_2_mockup, has to be created before this one):
CREATE VIEW the_schema.mockup_inv_movements_with_fifo_cost AS (
select
i.id,
i.created_at,
i.po_num,
i.sku,
i.adjustment_quantity,
i.unit_cost,
m.fifo_unit_cost
FROM
the_schema.mockup_inv_movements i
LEFT OUTER JOIN
the_schema.fifo_hypothesis_2_mockup m
ON
i.id = m.id
ORDER BY i.id
);
Adding some test inventory movement data:
-- insert receipt / 'in' records
INSERT INTO the_schema.mockup_inv_movements (sku, adjustment_quantity, unit_cost, po_num, created_at )
VALUES ('foo_product',100,4,123, now()+'1 hour'), ('foo_product',10,3,987, now()+'2 hour'), ('foo_product',20,7,223, now()+'3 hours')
;
INSERT INTO the_schema.mockup_inv_movements (sku, adjustment_quantity, unit_cost, po_num, created_at )
VALUES ('bar_product',100,5,123, now()+'4 hours'),('bar_product',30,6,963, now()+'5 hours'),('bar_product',50,8,223, now()+'6 hours'),('bar_product',5,5,456, now()+'7 hours')
;
--insert sale / 'out' records
INSERT INTO the_schema.mockup_inv_movements (sku, adjustment_quantity, unit_cost, po_num, created_at )
VALUES ('bar_product',-50,null,null, now()+'8 hours'),('bar_product',-30, null,null, now()+'9 hours'),
('bar_product',-20,null,null, now()+'10 hours'),('bar_product',-10,null,null, now()+'11 hours')
;
INSERT INTO the_schema.mockup_inv_movements (sku, adjustment_quantity, unit_cost, po_num, created_at )
VALUES ('foo_product',-70,null,null, now()+'12 hours'), ('foo_product',-5,null,null, now()+'13 hours'),
('foo_product',-20,null,null, now()+'14 hours'),('foo_product',-10,null,null, now()+'15 hours')
;
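For reference, a simple aggregate gives the net on-hand quantity per SKU; with the test data above it should show 25 for foo_product and 75 for bar_product:
SELECT
  sku,
  sum(adjustment_quantity) AS on_hand_qty
FROM the_schema.mockup_inv_movements
GROUP BY sku
ORDER BY sku;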
OK, now here's the query that calculates the 'sale / out' cost for each sale row, taken from this question, which seems to work; note that for the moment I'm only pulling the fifo_unit_cost column from this query:
CREATE VIEW the_schema.fifo_hypothesis_2_mockup AS (
SELECT
id,
sku,
created_at AT TIME ZONE 'mst' AS created_at_mst,
qty_sold,
-- 5: this sale's slice of the cumulative sold cost, divided by qty sold
round((cumulative_sold_cost - coalesce(lag(cumulative_sold_cost) over w, 0))/qty_sold, 2) as fifo_unit_cost,
qty_bought,
prev_bought,
total_cost,
prev_total_cost,
cumulative_sold_cost,
coalesce(lag(cumulative_sold_cost) over w, 0) as prev_cumulative_sold_cost
FROM (
SELECT id,
tneg.sku,
created_at,
qty_sold,
tpos.qty_bought,
prev_bought,
total_cost,
prev_total_cost,
-- 4: interpolate the cumulative sold cost within the receipt tranche this cumulative sale level falls into
round(prev_total_cost + ((tneg.cumulative_sold - tpos.prev_bought)/(tpos.qty_bought - tpos.prev_bought))*(total_cost-prev_total_cost), 2) as cumulative_sold_cost
FROM (
SELECT
id,
sku,
created_at,
-(adjustment_quantity) as qty_sold,
sum(-(adjustment_quantity)) over w as cumulative_sold
FROM the_schema.mockup_inv_movements
WHERE adjustment_quantity < 0
WINDOW w AS (PARTITION BY sku ORDER BY created_at)
-- 1: sale / 'out' rows with a per-SKU running total of quantity sold
) tneg
LEFT JOIN (
SELECT
sku,
sum(adjustment_quantity) over w as qty_bought,
coalesce(sum(adjustment_quantity) over prevw, 0) as prev_bought,
adjustment_quantity * unit_cost as cost,
sum(adjustment_quantity * unit_cost) over w as total_cost,
coalesce(sum(adjustment_quantity * unit_cost) over prevw, 0) as prev_total_cost
FROM the_schema.mockup_inv_movements
WHERE adjustment_quantity > 0
WINDOW w AS (PARTITION BY sku ORDER BY created_at),
prevw AS (PARTITION BY sku ORDER BY created_at ROWS BETWEEN unbounded preceding AND 1 preceding)
-- 2: receipt / 'in' rows with per-SKU running totals of quantity and cost, plus the same totals up to the previous receipt
) tpos
-- 3: match each cumulative sale level to the receipt tranche it falls into
ON
((tneg.cumulative_sold > tpos.prev_bought )
AND ( tneg.cumulative_sold <= tpos.qty_bought ))
AND tneg.sku = tpos.sku
) t
WINDOW w AS (PARTITION BY sku ORDER BY created_at)
ORDER BY id
)
;
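To make the interpolation at -- 4 concrete with the bar_product data: the receipts produce running quantities of 100, 130, 180, 185 and running costs of 500, 680, 1080, 1105. The fourth sale brings cumulative_sold to 110, which falls in the (100, 130] tranche, so cumulative_sold_cost = 500 + ((110 - 100) / (130 - 100)) * (680 - 500) = 560, and its fifo_unit_cost = (560 - 500) / 10 = 6.00; in other words, those last 10 units are costed entirely from the 30-unit receipt bought at 6.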
Now here's the part where I'm having trouble.
I need to calculate the value of remaining stock / inventory on hand, also known as "closing stock" or "closing inventory". I've tried a number of approaches, including this question and this 'set-based speed phreakery' method, the latter of which I readily admit I don't fully comprehend.
The approach that has come closest to working for me is this older hypothesis from Ranjeet Rana. It does seem to assign the FIFO costs according to the correct breakdown, BUT the sum of closing stock for each SKU does not match the raw difference between 'in' and 'out' quantities.
Here's the closing stock query adapted from Rana (comments mine; I've left them in, in case they indicate where my error is).
CREATE VIEW the_schema.closing_inv_hyp_3 AS (
select *,
case
when cumulative>0 and adjustment_quantity>=cumulative -- note that sale/out adjustment_quantity / cumulative is always less than zero
then cumulative*cost -- in this case, some amount of this row's receipt has been sold, and the remainder qty is shown in 'cumulative'
when cumulative>0 and adjustment_quantity<cumulative
then adjustment_quantity*cost -- in this case, none of this row's receipt has been sold, and so the entire adjustment amount is multiplied by unit cost
else 0 -- sale rows are assigned zero for this column
end as closing_stock
from (
select
*, -- all rows from subquery
sum(adjustment_quantity) over (order by srl) as cumulative -- THIS is the problematic column
from (
select
0 as srl, -- this ensures that all 'sale / out' rows float to the top
id,
sku,
adjustment_quantity,
COALESCE(fifo_unit_cost,unit_cost) AS cost,
created_at
from
the_schema.mockup_inv_movements_with_fifo_cost
where adjustment_quantity < 0 -- SALE / OUT only
UNION -- gets all from both queries (less any dupes)
select
row_number() over(order by created_at) as srl, -- this assigns a synthetic sequential row number to the 'PO / in' rows and ensures they are pushed to the bottom
id,
sku,
adjustment_quantity,
COALESCE(fifo_unit_cost,unit_cost) AS cost,
created_at
from
the_schema.mockup_inv_movements_with_fifo_cost
where
adjustment_quantity > 0 -- PO / IN only
ORDER BY srl
)as tab
) as maintab
);
With this in place, we should be able to get the sum of closing stock value per SKU with:
SELECT
sku,
sum(closing_stock) as closing_stock_sum_value
FROM
the_schema.closing_inv_hyp_3
WHERE closing_stock > 0
GROUP BY sku
ORDER by sku
;
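(With the test data above this returns a single row, bar_product | 680.00, because every foo_product row ends up with closing_stock = 0, as the full view output below shows.)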
However, as I mentioned, the totals do not match up with the basic inventory difference calculation (specifically in this test example, I would expect 75 units of bar_product to be represented in closing stock, whereas this query shows 100):
srl | id | sku | adjustment_quantity | cost | created_at | cumulative | closing_stock
-----+-----+-------------+---------------------+------+-------------------------------+------------+---------------
0 | 102 | foo_product | -70 | 4.00 | 2022-03-10 07:27:05.447572+00 | -215 | 0
0 | 100 | bar_product | -20 | 5.00 | 2022-03-10 05:27:05.447572+00 | -215 | 0
0 | 101 | bar_product | -10 | 6.00 | 2022-03-10 06:27:05.447572+00 | -215 | 0
0 | 103 | foo_product | -5 | 4.00 | 2022-03-10 08:27:05.447572+00 | -215 | 0
0 | 105 | foo_product | -10 | 3.50 | 2022-03-10 10:27:05.447572+00 | -215 | 0
0 | 99 | bar_product | -30 | 5.00 | 2022-03-10 04:27:05.447572+00 | -215 | 0
0 | 98 | bar_product | -50 | 5.00 | 2022-03-10 03:27:05.447572+00 | -215 | 0
0 | 104 | foo_product | -20 | 4.00 | 2022-03-10 09:27:05.447572+00 | -215 | 0
1 | 91 | foo_product | 100 | 4.00 | 2022-03-09 20:27:05.447572+00 | -115 | 0
2 | 92 | foo_product | 10 | 3.00 | 2022-03-09 21:27:05.447572+00 | -105 | 0
3 | 93 | foo_product | 20 | 7.00 | 2022-03-09 22:27:05.447572+00 | -85 | 0
4 | 94 | bar_product | 100 | 5.00 | 2022-03-09 23:27:05.447572+00 | 15 | 75.00
5 | 95 | bar_product | 30 | 6.00 | 2022-03-10 00:27:05.447572+00 | 45 | 180.00
6 | 96 | bar_product | 50 | 8.00 | 2022-03-10 01:27:05.447572+00 | 95 | 400.00
7 | 97 | bar_product | 5 | 5.00 | 2022-03-10 02:27:05.447572+00 | 100 | 25.00
(15 rows)
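A possible clue: every sale row shows cumulative = -215, which is the total quantity sold across both SKUs (105 foo_product + 110 bar_product), so the running sum is neither partitioned by SKU nor deterministically ordered among the tied srl = 0 rows. If the intent is a per-SKU running balance, the window would presumably need to look something like this sketch (which on this sample data lines up with the expected 75 remaining bar_product units, though I haven't verified it beyond this example):
sum(adjustment_quantity) over (
  partition by sku
  order by srl, id
  rows unbounded preceding
) as cumulative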
It seems like this would be the kind of thing that has a more-or-less standardized solution, but so far none of the resources I've found / tried has guided me to a working approach.
How can I accurately do FIFO closing stock / inventory valuation in Postgres?
All guidance much appreciated!
Using "Set-based Speed Phreakery: The FIFO Stock Inventory SQL Problem" as an example, re-working that approach for Postgres and the change of table/columns produces this query:
/* Sum up the ins and outs to calculate the remaining stock level */
WITH cteStockSum
AS ( SELECT sku ,
SUM(adjustment_quantity) AS TotalStock
FROM mockup_inv_movements
GROUP BY sku
)
, cteReverseInSum
AS ( SELECT s.sku ,
s.created_at ,
( SELECT SUM(i.adjustment_quantity)
FROM mockup_inv_movements AS i
WHERE i.sku = s.sku
AND i.adjustment_quantity > 0
AND i.created_at >= s.created_at
) AS RollingStock ,
s.adjustment_quantity AS ThisStock
FROM mockup_inv_movements AS s
WHERE s.adjustment_quantity > 0
)
/* Using the rolling balance above find the first stock movement in that meets
(or exceeds) our required stock level */
/* and calculate how much stock is required from the earliest stock in */
, cteWithLastTranDate
AS ( SELECT w.sku ,
w.TotalStock ,
LastPartialStock.created_at ,
LastPartialStock.StockToUse ,
LastPartialStock.RunningTotal ,
w.TotalStock - LastPartialStock.RunningTotal
+ LastPartialStock.StockToUse AS UseThisStock
FROM cteStockSum AS w
CROSS JOIN LATERAL ( SELECT
z.created_at ,
z.ThisStock AS StockToUse ,
z.RollingStock AS RunningTotal
FROM cteReverseInSum AS z
WHERE z.sku = w.sku
AND z.RollingStock >= w.TotalStock
ORDER BY z.created_at DESC
LIMIT 1
) AS LastPartialStock
)
/* Sum up the cost of 100% of the stock movements in after the returned row; for that row itself, use only 'UseThisStock' items */
SELECT y.sku ,
y.TotalStock AS CurrentItems ,
SUM(CASE WHEN e.created_at = y.created_at THEN y.UseThisStock
ELSE e.adjustment_quantity
END * Price.unit_cost) AS CurrentValue
FROM cteWithLastTranDate AS y
INNER JOIN mockup_inv_movements AS e
ON e.SKU = y.SKU
AND e.created_at >= y.created_at
AND e.adjustment_quantity > 0
CROSS JOIN LATERAL (
/* Find the Price of the item in */ SELECT
p.unit_cost
FROM mockup_inv_movements AS p
WHERE p.SKU = e.SKU
AND p.created_at <= e.created_at
AND p.adjustment_quantity > 0
ORDER BY p.created_at DESC
LIMIT 1
) AS Price
GROUP BY y.sku ,y.TotalStock
ORDER BY y.sku
and from your sample data the result produced is this:
+-------------+--------------+--------------+
| sku | currentitems | currentvalue |
+-------------+--------------+--------------+
| bar_product | 75 | 545.00 |
| foo_product | 25 | 155.00 |
+-------------+--------------+--------------+
also see: https://dbfiddle.uk/?rdbms=postgres_11&fiddle=f564a6cfda3374c2057b437f845a4bdf
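To trace bar_product through the CTEs: TotalStock is 75; the reverse rolling sums over its receipts, newest to oldest, are 5, 55, 85, 185, and the latest receipt whose RollingStock reaches 75 is the 30-unit receipt at cost 6 (RunningTotal 85), giving UseThisStock = 75 - 85 + 30 = 20. The final SELECT then values 20*6 + 50*8 + 5*5 = 545.00, matching the result above.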
DB-Fiddle:
CREATE TABLE logistics (
id int auto_increment primary key,
flow_date DATE,
flow_type VARCHAR(255),
flow_quantity INT
);
INSERT INTO logistics
(flow_date, flow_type, flow_quantity
)
VALUES
("2020-04-18", "inbound", "500"),
("2020-04-18", "outbound", "400"),
("2020-04-18", "stock", "100"),
("2020-04-19", "inbound", "800"),
("2020-04-19", "outbound", "650"),
("2020-04-19", "stock", "250"),
("2020-04-20", "inbound", "730"),
("2020-04-20", "outbound", "600"),
("2020-04-20", "stock", "380"),
("2020-04-21", "inbound", "420"),
("2020-04-21", "outbound","370"),
("2020-04-21", "stock", "430");
Expected Result:
flow_date    stock_yesterday   inbound   outbound   stock_today
2020-04-18   0                 500       -400       100
2020-04-19   100               800       -650       250
2020-04-20   250               730       -600       380
2020-04-21   380               420       -370       430
Basically, in my result I want to show this timeline: stock_yesterday + inbound - outbound = stock_today.
Therefore, I need to change the original table like the following:
a) The flow_types are used as columns in the result.
b) The stock_yesterday is the flow_quantity of the flow_type stock of the previous day.
c) All other flow_types refer to the same flow_date.
So far I came up with this query but could not make it work:
SELECT
flow_date,
(CASE WHEN flow_type = "inbound" THEN flow_quantity END) AS inbound,
(CASE WHEN flow_type = "outbound" THEN flow_quantity END) AS outbound,
(CASE WHEN flow_type = "stock" THEN flow_quantity END) AS stock_today
FROM logistics
GROUP BY 1;
It only displays the inbound.
I also have no clue how I could add the stock_yesterday to the query.
What do I need to change in my query to get the expected result?
You can use window functions and aggregation:
select
flow_date,
sum(inbound + outbound)
over(order by flow_date rows between unbounded preceding and 1 preceding) stock_yesterday,
inbound,
outbound,
sum(inbound + outbound) over(order by flow_date) stock_today
from (
select
flow_date,
sum(case when flow_type = 'inbound' then flow_quantity else 0 end) inbound,
sum(case when flow_type = 'outbound' then -flow_quantity else 0 end) outbound
from logistics
group by flow_date
) t
order by flow_date
The subquery is not strictly necessary, but it helps shorten the syntax.
Demo on DB Fiddle:
flow_date | stock_yesterday | inbound | outbound | stock_today
:--------- | --------------: | ------: | -------: | ----------:
2020-04-18 | null | 500 | -400 | 100
2020-04-19 | 100 | 800 | -650 | 250
2020-04-20 | 250 | 730 | -600 | 380
2020-04-21 | 380 | 420 | -370 | 430
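Note that the demo shows null rather than the expected 0 for stock_yesterday on the first day; if that matters, wrapping the window sum in coalesce should take care of it:
coalesce(
  sum(inbound + outbound) over(
    order by flow_date
    rows between unbounded preceding and 1 preceding
  ),
  0
) as stock_yesterday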
I have a table like this:
---------------------------------------------------------------
| UserID | Amount | PayDate |TransactionType| ...
----------------------------------------------------------------
| 1 | 140 | 2014-09-30 22:00:00.000| 7 |
| 2 | 230 | 2014-09-30 22:00:00.000| 7 |
| 1 | 120 | 2014-08-01 22:00:00.000| 7 |
| 2 | 135 | 2014-07-30 22:00:00.000| 7 |
| 1 | 120 | 2014-09-30 22:00:00.000| 4 |
----------------------------------------------------------------
I wrote the below query but it returns NULL. Please advise on this query as-is.
The dates declared below fall between 29/09/2014 and 1/10/2014:
Declare
@dateStart datetime= CONVERT(VARCHAR(25),DATEADD(dd,-(DAY(GETUTCDATE())+2),GETUTCDATE()),101),
@dateEnd datetime=(CONVERT(VARCHAR(25),DATEADD(dd,-(DAY(GETUTCDATE())-1),GETUTCDATE()),101))
Select
MemberID,
case
when transactionType = 7
and (PayDate between @dateStart and @dateEnd) then Amount
End AS 'Outstanding Amount'
from
MemberPayment
My output should be :
| MemberID | OutStanding Amount|
---------------------------------
| 1 | 140 |
| 2 | 230 |
but the query returns NULL. What am I doing wrong? Is CASE WHEN with date BETWEEN dates used correctly in SQL Server 2008 R2?
PS: Please note I do not want to change the query to use a WHERE condition.
Thank you in advance stack overflow family.
This should do the job:
Declare
@dateStart datetime= DATEADD(dd,-(DAY(GETUTCDATE())+2),GETUTCDATE()),
@dateEnd datetime=DATEADD(dd,-(DAY(GETUTCDATE())-1),GETUTCDATE())
select MemberID, [Outstanding Amount]
from
(
Select
UserID as MemberID,
case
when transactionType = 7
and (PayDate between @dateStart and @dateEnd) then Amount
End AS 'Outstanding Amount'
from
MemberPayment
) As TmpQuery
where [Outstanding Amount] is not null
I removed the CONVERT to varchar from both of your variables.
Then I put a SELECT around your query, to filter to just the results where Outstanding Amount is not NULL. (A CASE expression with no matching WHEN branch returns NULL, which is why every row outside the date range came back as NULL in your version.)
Please note that I selected UserID as MemberID, because you used UserID in your example.
I tested it with a table where PayDate is a datetime column.
As already mentioned in one of the comments, I would prefer the easy method (and it's much faster!):
Declare
@dateStart datetime= DATEADD(dd,-(DAY(GETUTCDATE())+2),GETUTCDATE()),
@dateEnd datetime=DATEADD(dd,-(DAY(GETUTCDATE())-1),GETUTCDATE())
select UserID, Amount as [Outstanding Amount]
from MemberPayment
where TransactionType = '7'
and PayDate between #dateStart and #dateEnd
I'm trying to compile some lifetime value information for customers within one of our databases.
We have an MS SQL Server database which stores all of our customer/transactional information.
My issue is that I don't have much experience when it comes to MS SQL Server (or SQL in general). I'd like to be able to run a query against the database that pulls the AVG number of loans, and the AVG revenue, based on three criteria:
1.) Loans are only counted if they are 'approved'.
2.) Loans from a customer_id are only counted if that customer's first loan (identified by the date_created field) is on or after a certain 'mm/yyyy'.
3.) I'm able to specify for how many months after the first 'mm/yyyy' to tally the number of loans / revenue to be included within the AVG.
Here is what the database would look like:
customer_id | loan_status | date_created | revenue
111 | 'approved' | 2010-06-20 17:17:09 | 100.00
222 | 'approved' | 2010-06-21 09:54:43 | 255.12
333 | 'denied' | 2011-06-21 12:47:30 | NULL
333 | 'approved' | 2011-06-21 12:47:20 | 56.87
222 | 'denied' | 2011-06-21 09:54:48 | NULL
222 | 'approved' | 2011-06-21 09:54:18 | 50.00
111 | 'approved' | 2011-06-20 17:17:23 | 100.00
... loads of records ...
555 | 'approved' | 2012-01-02 09:08:42 | 24.70
111 | 'denied' | 2012-01-05 02:10:36 | NULL
666 | 'denied' | 2012-02-05 03:31:16 | NULL
555 | 'approved' | 2012-02-17 09:32:26 | 197.10
777 | 'approved' | 2012-04-03 18:28:45 | 300.50
777 | 'approved' | 2012-06-28 02:42:01 | 201.80
555 | 'approved' | 2012-06-21 22:16:59 | 10.00
666 | 'approved' | 2012-09-30 01:17:20 | 50.00
If I wanted to find the average transaction count (approved transactions), and the average revenue per approved transaction, for all customers whose first loan was in/after 2012-01 and for a period of 4 months after then, how would I go about querying the database?
Any help is greatly appreciated.
Something like this (there may be a few typos here and there)...
you could first calculate the minimum loan date:
select customer_id, min(date_created) as min_date from tbl where loan_status = 'approved' group by customer_id
then you can join to it:
select t.customer_id, count(t.date_created) as loan_count, avg(t.revenue) as avg_revenue
from tbl t
join (
select customer_id, min(date_created) as min_date from tbl where loan_status = 'approved' group by customer_id ) s
on t.customer_id = s.customer_id
where t.date_created between s.min_date and DATEADD(month, 4, s.min_date) and t.loan_status = 'approved'
group by t.customer_id
Rename tbl to your table name.
Specify dates in the format YYYYMMDD.
select t.customer_id, AVG(t.revenue) average_revenue
from
(
select customer_id
from tbl
group by customer_id
having min(date_created) >= '20120101'
) fl
join tbl t on t.customer_id = fl.customer_id
where t.loan_status = 'approved'
and t.date_created < '20120501' -- NOT including May the first, so Jan through Apr (4 months)
group by t.customer_id
If you mean 4 months after each customer's first loan, leave me a comment, state whether it's 4 calendar months (e.g. 15-Jan to 15-May) or up to the last day of the 4th month (15-Jan to 30-Apr), and I'll update the answer.
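If what's ultimately wanted is a single pair of figures across all qualifying customers (the question's "AVG number of loans" and "AVG revenue"), one way, sketched under the same assumptions as the query above (table named tbl, dates as YYYYMMDD), is to aggregate per customer first and then average those per-customer aggregates:
-- per-customer counts and revenue first, then averages across customers
select
    AVG(CAST(loan_count AS decimal(10, 2))) as avg_loans_per_customer,
    AVG(revenue_per_loan) as avg_revenue_per_loan
from
(
    select
        t.customer_id,
        COUNT(*) as loan_count,
        AVG(t.revenue) as revenue_per_loan
    from tbl t
    join
    (
        select customer_id
        from tbl
        group by customer_id
        having MIN(date_created) >= '20120101'
    ) fl on fl.customer_id = t.customer_id
    where t.loan_status = 'approved'
      and t.date_created < '20120501' -- 4 months: Jan through Apr
    group by t.customer_id
) as per_customer;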