Query to find average stock ... with a twist - sql

We are trying to calculate average stock from a movements table in a single SQL statement.
So far, no problem with what we thought was a standard approach: instead of adding up the daily stock and dividing by the number of days (we don't have daily stock figures), we simply sum (movement quantity * remaining days):
select sum(quantity*(END_DATE-move_date))/(END_DATE-START_DATE)
from move_table
where move_date<=END_DATE
This is a simplified example; in real life we already take care of the initial stock at the start date. Let's say there are no movements prior to START_DATE.
The sign of the quantity depends on the move type (sale, purchase, inventory, etc.).
Of course this is done grouping by product, warehouse, ... but you get the idea.
It works as expected and the calculation is correct.
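For illustration, a slightly fuller sketch with the grouping applied (product_id, warehouse_id and the :start_date / :end_date parameters are just placeholders for our real columns; date subtraction returning days, as in Oracle, is assumed):
-- sketch only: average stock per product and warehouse over the period
select product_id,
       warehouse_id,
       sum(quantity * (:end_date - move_date)) / (:end_date - :start_date) as avg_stock
from move_table
where move_date <= :end_date
group by product_id, warehouse_id;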
But (there is always a "but") our customer doesn't want to count days when there is no stock (everything sold out). So he doesn't like
Sum of (daily_stock) / number_of_days (which is what we calculate, just with different math)
Instead, he would like
Sum of (daily stock) / number_of_days_in_which_stock_is_not_zero
Of course we can do this in any programming language without much effort, but I was wondering how to do it in plain SQL ... and wasn't able to come up with a solution.
Any suggestion?

Consider creating a new table called something like Stock_EndOfDay_History that has the following columns.
stock#
date
stock_count_eod
This table would get a new row for each stock item at the start of each day, recording the prior day's end-of-day count. Rows could then be purged from this table once the applicable date falls outside the date window of interest.
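A minimal sketch of that table (data types are assumptions, and I've renamed stock# and date to more portable identifiers):
-- Sketch: one row per stock item per day, written at the start of the following day.
CREATE TABLE Stock_EndOfDay_History (
    stock_no        INT  NOT NULL,   -- the stock# above
    stock_date      DATE NOT NULL,   -- the day the count applies to
    stock_count_eod INT  NOT NULL,   -- end-of-day stock count
    PRIMARY KEY (stock_no, stock_date)
);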
To get the "number_of_days_in_which_stock_is_not_zero", use this.
SELECT COUNT(*) AS Not_Zero_Stock_Days FROM Stock_EndOfDay_History
WHERE stock# = <stock#_value>
AND stock_count_eod <> 0
AND <date_window_clause>
Other approaches might attempt to just add a new column to the existing stock table to maintain a running count of the number_of_days_in_which_stock_is_not_zero. But inevitably, questions will be asked about how the non-zero stock days count was calculated. The new-table approach answers those questions better than the new-column approach.

Related

SQL GROUPING SETS averages with multiple many-to-many dimensions

I have a table of data with the following:
User,Platform,Dt,Activity_Flag,Total_Purchases
1,iOS,05/05/2016,1,1
1,Android,05/05/2016,1,2
2,iOS,05/05/2016,1,0
2,Android,05/05/2016,1,2
3,iOS,05/05/2016,1,1
3,Android,06/05/2016,1,3
1,iOS,06/05/2016,1,2
4,Android,06/05/2016,1,2
1,Android,06/05/2016,1,0
3,iOS,07/05/2016,1,2
2,iOS,08/05/2016,1,0
I want to do a GROUPING SETS (Platform,Dt,(Platform,Dt),()) aggregation to be able to find for each combination of Platform and Dt the following:
Total Purchases
Total Unique Users
Average Purchases per User per Day
The first two are simple as these can be achieved via a sum(Total_Purchases) and count(distinct user) respectively.
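For example, a sketch of just those first two metrics (activity is a made-up name for the table above):
-- Sketch: total purchases and unique users for each grouping set.
-- "User" is quoted since USER is typically a reserved word; adjust quoting to your DBMS.
SELECT Platform,
       Dt,
       SUM(Total_Purchases) AS Total_Purchases,
       COUNT(DISTINCT "User") AS Total_Unique_Users
FROM activity
GROUP BY GROUPING SETS ((Platform), (Dt), (Platform, Dt), ());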
The problem I have is with the last metric. The result set should look like this but I don't know how to get the last column to be calculated correctly:
Platform,Dt,Total_Purchases,Total_Unique_Users,Average_Purchases_Per_User_Per_Day
Android,05/05/2016,4,2,2.0
iOS,05/05/2016,2,3,0.7
Android,06/05/2016,5,3,1.7
iOS,06/05/2016,2,1,2.0
iOS,07/05/2016,2,1,2.0
iOS,08/05/2016,0,1,0.0
,05/05/2016,6,3,2.0
,06/05/2016,7,3,2.3
,07/05/2016,2,1,2.0
,08/05/2016,0,1,0.0
Android,,9,4,1.8
iOS,,6,3,1.2
,,15,4,1.6
For the first ten rows, the average purchases per user per day is a simple division of the first two measure columns, because each of those rows covers a single date. But for the final 3 rows, that division does not give the desired result, because the metric needs to average each day's figure in turn to get the overall per-day amount: for example, the overall Android figure is the average of the daily averages 2.0 and 1.7, about 1.8, not 9 / 4 = 2.25.
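To illustrate the logic only (this isn't a working combination with GROUPING SETS; activity is a made-up table name), the platform-level figure is an average of per-day averages, something like:
-- Sketch: average of the per-day averages for each platform.
-- "User" is quoted since USER is typically a reserved word; adjust quoting to your DBMS.
SELECT Platform,
       AVG(daily_avg) AS Average_Purchases_Per_User_Per_Day
FROM (
    SELECT Platform,
           Dt,
           SUM(Total_Purchases) * 1.0 / COUNT(DISTINCT "User") AS daily_avg
    FROM activity
    GROUP BY Platform, Dt
) AS per_day
GROUP BY Platform;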
If this isn't clear please let me know and I'll be happy to explain better. This is my first post on this site!

Need to perform some not so straight forward data processing in Access 2010

I have a table in Access that is setup like the one in the photo. What I need to do is this:
For each part no, I want to sum the total Qty for each month and type (Ordered and Demand). Then, when the sum of the Qty for Ordered is greater than the sum for Demand, I need to cap the Qty in the rows where the type is Ordered so that their total matches the Demand total. Let me try to explain it another way.
I want to look at a subset of the master data, in this case grouped by part no (rows with identical part numbers). For this subset I want two sums: 1. the sum of Qty with type = Ordered, and 2. the sum of Qty with type = Demand. If the sum for Ordered is greater than the sum for Demand, I want to change the Qty for Ordered to match the Qty for Demand.
Essentially, the business reason is that for reporting purposes the total Qty for Ordered shouldn't be more than Demand in a given month, for a part number.
Looking at the photo, the rows in red need to change because the sum of their Qty is 30, which is greater than the sum of Qty for the green rows (25). The red rows' Qty should be changed to 20 and 5 to match the green rows.
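To illustrate the comparison (the MasterData table and the PartNo, Type, Qty and OrderDate names are just stand-ins for my real schema), the monthly totals I want to compare per part would be something like this Access query, which lists the part/month combinations where Ordered exceeds Demand:
SELECT PartNo,
       Format(OrderDate, "yyyy-mm") AS OrderMonth,
       Sum(IIf(Type = "Ordered", Qty, 0)) AS OrderedQty,
       Sum(IIf(Type = "Demand", Qty, 0)) AS DemandQty
FROM MasterData
GROUP BY PartNo, Format(OrderDate, "yyyy-mm")
HAVING Sum(IIf(Type = "Ordered", Qty, 0)) > Sum(IIf(Type = "Demand", Qty, 0));
That only identifies the groups that need capping; actually writing the capped values back to the individual Ordered rows (the 20 and 5 in the example) is what I can't work out.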
Whew, hope this made sense, because it is hard to explain. I have tried many things for a couple of weeks now, and I am a bit fuzzy on the details, so I will just give a high-level summary. What have I tried:
I have tried to join the table to itself, using part no (and date, I believe) to join on, but that didn't work because the sums sometimes came out incorrect.
I pivoted the table using the TRANSFORM and PIVOT functions in Access, but it's important for me to keep the individual dates intact, and when I pivoted I had to roll the data up by month. This gives me the row structure I need to make the changes, but I don't know how to get back to the original date format afterwards.
I am guessing I need some VBA code that loops through each part no, but I am not big on VBA and I don't have much time to learn it. Any suggestions? I know this is long-winded but it's a complicated problem (at least for me). Thanks in advance.

How to handle monthly and yearly values

I have a Fact table that holds what are, more or less, sales goals. The ETL process that populates it generates 12 "weighted" values in separate rows, one per month. Each row, however, also includes a field that holds the yearly value. I do this with UNPIVOT. This all works. Now I'm trying to get at this data in the cube with an SSRS report. The problem seems to be that I can query and see results that include either the yearly goal values or the monthly weighted values, but not both in the same set.
[update for fact table details]
My Fact table looks something like this:
FK_Account
FK_User
Target
Projected
GoalYear
FK_DateKey
FK_Dept
MonthlyWeightedTarget
MonthlyWeightedProjected
When I load this fact table via the ETL, I get the date key associated with each monthly value (MonthlyWeightedTarget). That will be 12 separate records, but each one will have the same yearly value. I'm not including next year's value as a separate column, because there are separate records already associated with that year.
Basically, the users define a set of goals associated with a given year. Then I apply a "weighting" to generate 12 separate "monthly" records, which total up to the yearly target goal. Hope this makes sense.
What I need to see is something like this result:
Account Name
YTDgoal
YearGoal
NextYrGoal
I created a calculated member for the NextYrGoal, but now I'm not sure I even need it.
What would be a good approach for handling the above (getting the ytd, yearly and next year values) ?
If I were getting at these values with T-SQL, I would sum the monthly values and just include the associated yearly and next-year values, grouping by account, year goal, and next-year goal.
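Roughly, the T-SQL I have in mind would be something like this sketch (FactGoals is a placeholder name for the fact table above, @Year and @AsOfDateKey are assumed parameters, FK_DateKey is assumed to sort chronologically, and the join to the account dimension for the account name is omitted):
-- Sketch: monthly weighted rows summed into YTD, full-year and next-year goals per account.
SELECT f.FK_Account,
       SUM(CASE WHEN f.GoalYear = @Year AND f.FK_DateKey <= @AsOfDateKey
                THEN f.MonthlyWeightedTarget ELSE 0 END) AS YTDgoal,
       SUM(CASE WHEN f.GoalYear = @Year
                THEN f.MonthlyWeightedTarget ELSE 0 END) AS YearGoal,
       SUM(CASE WHEN f.GoalYear = @Year + 1
                THEN f.MonthlyWeightedTarget ELSE 0 END) AS NextYrGoal
FROM FactGoals AS f
WHERE f.GoalYear IN (@Year, @Year + 1)
GROUP BY f.FK_Account;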

Sql Queries for finding the sales trend

Suppose I have a table which has all the billing records. Now I want to see the sales trend for a user-given time duration, grouped by each 3 days ... what should be the SQL query for this?
Please help, otherwise I am gone ...
I can only give a vague suggestion given the question, but you may want to add a derived column with a standardised date (as in the MS date format, just a number per day) to which you can then apply a modulus of 3, so that days fall into equal 3-day periods. You can then group and aggregate over this column to get the values for each 3-day period. Obviously, to display the date nicely you would have to convert the column back to a date as well.
Again, I'm not sure of the specifics, but I think this general idea could be used to get a result (it may well not be the best way, so it would help to add more detail to the question...).
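A rough sketch of what I mean, assuming SQL Server-style date functions and made-up names (billing, bill_date, amount, @from_date, @to_date):
-- Sketch: map each billing date to the start of its 3-day bucket via the modulus,
-- then total the sales per bucket.
SELECT period_start,
       SUM(amount) AS total_sales
FROM (
    SELECT amount,
           DATEADD(day,
                   DATEDIFF(day, '2000-01-01', bill_date)
                     - (DATEDIFF(day, '2000-01-01', bill_date) % 3),
                   '2000-01-01') AS period_start
    FROM billing
    WHERE bill_date >= @from_date
      AND bill_date < @to_date
) AS d
GROUP BY period_start
ORDER BY period_start;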

Predictive Ordering Logic

I have a problem and was wondering if anyone could help or if it is even possible to have an algorithm for something like this.
I need to create a predictive ordering wizard. Based on previous sales, we will determine that a certain amount of an item is required, e.g. 31 apples. Now I need to work out the number of cases that need to be ordered. If the cases come in, say, 60, 30, 15 and 10 apples, the order should be a case of 30 and a case of 10 apples.
The number of items that need to be ordered changes in each row of the result set. The case sizes can also change for each item, so some items may have an option of 5 different cases and some items may end up with an option of only one case.
Other examples: I need 39 cans of Coke and the cases come in only 24 per case, therefore needing 2 cases. I need 2 shots of Baileys and the bottles of Baileys come in 50cl or 70cl, therefore I need the 50cl.
The result set's columns are ItemName, ItemSize, QuantityRequired, PackSize and PackSizeMultiple.
ItemName is the item to be ordered. ItemSize is the size the item is used in, e.g. a can of Coke. QuantityRequired is how many of the item, in this case cans of Coke, need to be ordered. PackSize is the size of the case. PackSizeMultiple is the number to multiply the item by to work out how many of the items are in the case.
ps. this will be a query in SQL Server 2008
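For the single-case-size situation (like the 39 cans of Coke), the case count is just a round-up division, something like this sketch (OrderRequirements is a placeholder name for the real source of the result set):
-- Sketch: cases needed when an item has only one PackSizeMultiple,
-- e.g. 39 cans at 24 per case -> CEILING(39 / 24.0) = 2 cases.
SELECT ItemName,
       ItemSize,
       QuantityRequired,
       PackSize,
       PackSizeMultiple,
       CEILING(QuantityRequired / CAST(PackSizeMultiple AS decimal(10, 2))) AS CasesToOrder
FROM OrderRequirements;
It's picking a combination of several case sizes (the 30 + 10 apples example) that I can't see how to express in a query.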
Sounds like you need a UOM (Unit of Measure) table and a function to calculate the co-pack measure count and the unit count measure quantity, with the UOM type based on the time between orders. You would also need a cron cycle and a freeze table, managed by week/time interval, in order to create a frozen view of the quantity sold each week and the number of units since the last order.
Based on the two orders preceding your prior order, you would set the current prediction from the minimum time between the last two freeze cycles containing an order and the number of days between them. From the average time between orders and the unit quantity in each order, you can create a unit decay ratio (a percentage per day) and store it in each slice going forward. With a reference to this data, you will be able to create a prediction that lets you trigger a notice to sales, or a message to the client, to reorder. In addition, if you capture response data from sales based on unit count feedback from the client, you can compare against actuals and tune your decay rate against your prediction.
You should also consider managing and rolling up these freezes by month, so that you can view historical trends and forecast revenue based on reorder velocity and the same period last year. Basically this is similar to sales forecasting, except that your opportunity's percentage of close is swapped out for a predicted remaining quantity percentage.