Optimization or Related Rates to solve this problem? - calculus

The problem: A company has $120,000 to spend on the development and promotion of a new product. The company estimates that if x is spent on development and y is spent on promotion, then approximately (x^(1/2)y^(3/2))/(400000) items of the new product will be sold. Based on this estimate, what is the maximum number of products that the company can sell?
I'm not sure whether this is an optimization or a related rates problem, and either way I'm not sure how to start it. I know the answer is supposed to be 11,691.

The critical observation here is to notice that y = 120000 - x, so your expression simplifies to a function of x alone:
f(x) = (x^(1/2)(120000 - x)^(3/2))/400000, with 0 ≤ x ≤ 120000.
The max occurs at x = 30000: setting the first derivative to zero gives (1/2)(120000 - x) = (3/2)x, i.e. 120000 - x = 3x, so x = 30000. Substituting back, you'll find the max number of products to be 11,691.
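For anyone who wants to check this numerically, here is a minimal Python sketch (not part of the original answer) that grid-searches the budget split:

def items(x, budget=120000):
    # y = budget - x is the promotion spend; formula taken from the question
    y = budget - x
    return (x ** 0.5) * (y ** 1.5) / 400000

# Coarse grid search over the development spend x
best_x = max(range(0, 120001, 1000), key=items)
print(best_x, round(items(best_x)))  # 30000 11691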


Is there a simple algorithm or process for maximizing quantity


Assume there are two products, A and B. The price of each changes each day and is independent of the other. You start with 100 units of A. Each day you can exchange (sell and buy) one product for the other. Your objective is to increase your quantity of A over, say, 100 days/iterations. What process do you use?
Price of A    Quantity    Price of B    Quantity
$10           100         $43           0
$11           -           $39           -
$12           -           $41           -
Note: I’m using prices and products in this example, but the problem could involve any countable thing with a variable feature.
I've modeled this process with Excel/Numbers using combinations of buy high/low, random, etc., with decent results, but I'm sure this is a problem that has already been studied. I just haven't found much on this topic in my research so far.
To make it a little easier you can reduce the problem to a single unknown variable: "price of A / price of B". To maximize your quantity of A, you want to exchange all of your A for B when "price of A / price of B" will decrease on the next day/iteration, and exchange all of your B for A when "price of A / price of B" will increase on the next day/iteration.
However, to do this (to guarantee you maximize the quantity of A) you have to accurately predict the future with no mistakes.
If you can't predict the future with no mistakes, the best you can do is rely on "statistical probability" to try to increase your quantity of A (with a risk that you will fail and reduce your quantity of A, and an extremely low chance that you'll "maximize your quantity of A by accident").
If you can't predict the future at all, then it just becomes pure luck (better to do nothing and keep the quantity of A you already have). For example, over 100 days/iterations, you could spend the first 50 days gathering information about how prices change (e.g. calculate "past min/max/average prices" and maybe find that "price of A / price of B" was always between 0.2 and 0.3 for the first 50 days/iterations); but it'd be foolish to merely assume that the past predicts the future in any way at all (e.g. the "price of A / price of B" might suddenly jump in any direction and never return to the range of previously seen values).
In other words, to improve your probability of increasing your quantity of A you need to improve your ability to predict the future; and to improve your ability to predict the future you need more information than what was provided.
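To make the perfect-foresight rule above concrete, here is a small Python sketch (my own illustration, with made-up prices extending the question's table); it swaps everything into whichever product the next day's price ratio favours:

price_a = [10, 11, 12, 10, 13]  # made-up daily prices of A
price_b = [43, 39, 41, 44, 40]  # made-up daily prices of B

qty_a, qty_b = 100.0, 0.0       # start with 100 units of A, as in the question
for day in range(len(price_a) - 1):
    ratio_today = price_a[day] / price_b[day]
    ratio_next = price_a[day + 1] / price_b[day + 1]
    if ratio_next < ratio_today and qty_a > 0:
        # A is about to get cheaper relative to B, so park everything in B
        qty_a, qty_b = 0.0, qty_a * price_a[day] / price_b[day]
    elif ratio_next > ratio_today and qty_b > 0:
        # A is about to get dearer relative to B, so move everything back into A
        qty_a, qty_b = qty_b * price_b[day] / price_a[day], 0.0

if qty_b > 0:
    # convert any remaining B back to A at the final prices, since A is the goal
    qty_a, qty_b = qty_a + qty_b * price_b[-1] / price_a[-1], 0.0
print(round(qty_a, 2))  # about 128.78 units of A with these made-up prices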

Can AMPL handle this recursively or is a remodeling necessary?

I'm using AMPL to model a production process where I have two particular constraints that I'm not sure how to handle.
subject to Constraint1 {t in T}:
prod[t] = sum{i in I} x[i,t]*u[i] + Recycle[f]*RecycledU[f];
subject to Constraint2 {t in T}:
Solditems[t]+Recycle[t]=prod[t];
EDIT: Here x[i,t] is the amount of product from supply point i, and u[i] denotes the "exchange rate" of the raw material from supply point i to create the product, i.e. a percentage of the raw material becomes finished product, whereas some raw material goes to waste. The same is true for RecycledU[f], where f is in F, the set of refinement stations where it has been refined. The difference is that RecycledU[f] has a much lower percentage going to waste, because Recycle is already a finished product from f (albeit a much less profitable one). That is, Recycle already "went through" the stage of being a raw material, x, and became a finished product in some earlier stage, or hopefully (if it can be modelled) in the same time period as this. In the actual model things such as "products" and "refinement stations" exist as well, but I figured for this question those could be left out to keep it simpler.
What I want to accomplish is that the amount of products produced is the sum of all items sold in time period t and the amount of products recycled in time period t (by recycled I mean that the finished product is kept at the production site for further refinement in some timestep g, g>t).
Is it possible to use prod[t] in two equality constraints like I have done? Also, how do I handle Recycle[t]? Can AMPL "understand" that, since these are represented at the same time step, it must handle the constraints recursively, i.e. compute a solution for Recycle[t] and subsequently try to improve that solution in every time step?
EDIT: The time periods are expressed in years which is why I want to avoid having an expression with Recycle[t-1].
EDIT2: prod and x are parameters and Recycle and Solditems are variables.
Hope someone can shed some light on this!
Cenderze
The two constraints will be considered simultaneously (unless you explicitly exclude one from the problem). AMPL and optimization solvers don't have a notion of time steps; the complete problem is considered at once, so you might need to add some linking constraints between time periods yourself. In particular, you might need to make sure that the inventory (such as the amount of finished product kept at the production site for further refinement) is carried over from one period to another, something like:
Recycle[t + 1] = Recycle[t] - RecycleDecrease + RecycleIncrease;
You have to figure out the expressions for the amounts by which Recycle is increased (RecycleIncrease) and decreased (RecycleDecrease).
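To illustrate what such a linking constraint encodes (a plain Python sketch, not AMPL; all quantities are made up), the recycled inventory in each year is simply a balance over the previous year, so nothing has to be solved "recursively":

recycle = {0: 5.0}             # made-up stock of recycled product held at the start
increase = {0: 2.0, 1: 3.0}    # made-up amount recycled in each year
decrease = {0: 1.0, 1: 4.0}    # made-up amount taken out for further refinement
for t in (0, 1):
    # the linking constraint written as an update rule
    recycle[t + 1] = recycle[t] - decrease[t] + increase[t]
print(recycle)                 # {0: 5.0, 1: 6.0, 2: 5.0}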
Also if you want some kind of an iterative procedure with one constraint considered at a time instead, then you should use AMPL script.

SQL Server Payment estimate probability - best maths equations and recursive query?

I have a table which holds a list of transactions.
Task: To estimate the next transaction amount.
Problem:
The actual payment period for each row is variable; it can be weekly, monthly or anything chosen by the end user.
To estimate the next payment based on previous data, can anyone suggest a good method?
At the moment I basically take the figure back to a daily amount and then multiply by the period, i.e. week/month/quarter/year. Then, given the history, I choose the result that has the highest incidence (count).
This does not generate accurate estimates, due to payments within payments that I don't need to care about, e.g. a £100 real payment plus £20 of additional charges that are irrelevant.
Another way is to calculate the average, standard deviation and variance between payments, then choose the highest probability.
The problem is, I've been unable to code this in SQL.
SELECT [Identifier]
,[DateTranEntered]
,[Type]
,[TranDateFrom]
,[TranDateTo]
,[Amount]
,[ReferenceForTran]
,[CreatedDate]
FROM [TranTable]
Perhaps something that recurses through the table and calculates every transaction's daily amount, then, using the variance and incidence, chooses from the last 'x' transactions what the estimated guess is?
The problem is I have gotten stuck with the recursive query for this.
Any thoughts about this?
SQL Server Analysis services has a suite of data mining tools that provide algorithms such as Linear Regressions, Decision Trees and Neural Networks. You can learn more about them here: http://msdn.microsoft.com/en-us/library/ms175595.aspx. It sounds like Linear Regressions might be the best place to start for this problem.
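If you want to stay in plain SQL for now, the averaging idea from the question can be sketched without recursion. The snippet below is Python purely to show the logic (all rows are made up, column names follow the question's SELECT); the same per-day normalisation and AVG/STDEV aggregation can be written with an ordinary GROUP BY in T-SQL:

from datetime import date
from statistics import mean, stdev

# Hypothetical transactions: (TranDateFrom, TranDateTo, Amount)
rows = [
    (date(2014, 1, 1), date(2014, 1, 31), 100.0),
    (date(2014, 2, 1), date(2014, 2, 28), 100.0),
    (date(2014, 3, 1), date(2014, 3, 31), 120.0),  # 100 plus 20 of irrelevant extra charges
]

# Normalise every transaction to a per-day rate, whatever its period length
daily = [amount / ((end - start).days + 1) for start, end, amount in rows]

avg, sd = mean(daily), stdev(daily)
period_days = 31                      # length of the period being estimated
estimate = avg * period_days          # sd indicates how much to trust the estimate
print(round(estimate, 2), round(sd, 3))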

A better way to generate pricing based on f(N)

I have in-game currency in my game. For a user to buy the next upgrade I currently use a very simple method whereby the Nth upgrade costs N*1000 coins.
I'm not a massive fan of this at the moment, as I'd like it to be a bit easier to start off with and possibly scale better, so it's not quite as hard to get upgrades.
One solution would be to use Fibonacci, which gives great early results but would make later upgrades nigh on impossible.
Can anyone offer a solution, as my maths knowledge is pretty limited?
What about a sigmoid function? It starts to rise slowly, then it rises nearly linearly, and at the end it slows down again.
If you look at the graph on Wolfram Alpha, you can calculate your price like this:
price = a_bit_more_than_maximum_upgrade_price * sigmoid( x )
You have to choose what fraction of the maximum price the starting upgrade will cost; if you choose a starting x = -4 you'll get a price less than 5% of the maximum. The ending x could be equal to 4, where you'll reach around 98% of the maximum price. Then, given the number of upgrades, you can calculate the input for the sigmoid like this:
x = (upgrade_index / (number_of_upgrades-1)) * 8.0 - 4.0
The upgrade index starts from zero, and you have to have at least 2 upgrades :)
You can trim off the last few digits or round them to get nicer-looking numbers.
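A minimal Python sketch of this mapping (function and variable names are mine, not from the answer):

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def upgrade_price(index, number_of_upgrades, max_price):
    # Map the upgrade index onto [-4, 4] as described above
    x = (index / (number_of_upgrades - 1)) * 8.0 - 4.0
    # Roughly 2% of max_price for the first upgrade, about 98% for the last
    return round(max_price * sigmoid(x))

prices = [upgrade_price(i, 20, 100000) for i in range(20)]
print(prices[0], prices[10], prices[-1])  # slow start, near-linear middle, flattening end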
This seems like a question better suited to http://programmers.stackoverflow.com
But anyway, I would say try using an exponential function, something like
f(n) = 1000 * 1.1^n
Obviously, once you have 100 or more upgrades the price gets a bit ridiculous; you could then use a condition to check whether n is larger than a certain number and, if so, fall back to your linear function to calculate the price of the next upgrade.
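As a rough sketch of that hybrid in Python (the 1000 base and 1.1 growth factor come from this answer; the switch point and the linear step are arbitrary choices of mine):

def upgrade_cost(n, base=1000, growth=1.1, switch_at=50):
    # Exponential growth for the first upgrades...
    if n <= switch_at:
        return round(base * growth ** n)
    # ...then continue linearly from the price reached at the switch point
    price_at_switch = base * growth ** switch_at
    return round(price_at_switch + (n - switch_at) * base)

print([upgrade_cost(n) for n in (1, 10, 50, 60)])  # 1100, 2594, 117391, 127391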

Ranking algorithm in a rails app

We have a model in our Rails app whose objects are assigned a score based on positive user actions. We'll call them products for simplicity's sake. If a user likes a product, buys a product or views a product, the score is incremented with various weights (a like might be worth more than a view, two views in the span of 30 seconds might be worth more than three views spread over an hour, etc.).
We'd like to use these scores to help sort and rank products, say for a popular products list, but using the straight ranking is going to unevenly favor older products, since they'll have had more time to amass a higher score.
My question is: how do we normalize the scores between new and old products? I thought about dividing the product's score by a unit of time, say the number of days it's been in existence, but I'm worried that will cut down the older products too much. Any thoughts on the best way to fairly normalize the scores between old and new products?
I'm also considering an example of a Bayesian rating system I found in another question:
rating = ((avg_num_votes * avg_rating) + (product_num_votes * product_rating)) / (avg_num_votes + product_num_votes)
Here the avg numbers are calculated by looking at the scores across all products that have more than one vote (or, in our case, a positive action). This might not be the best way, because we don't have a negative rating in our system and it doesn't take time into consideration at all.
Your question reminds me of the concept of exponential discounting of cash flows in finance.
The concept is the following: $100 in two years is worth less than $100 in one year, which is worth less than $100 now, ...
I think we can make a good comparison here: a product from yesterday is worth more than a product from the day before, but less than a product from today.
The formula is simple:
Vn = V0 * (1-t)^n
with V0 the initial value (the real number of positive votes), t a discount rate (you have to fix it, e.g. 10%) and n the time passed (for example, n days). Thus a product will lose 10% of its value each day (but 10% of the previous day's value, not of the initial value).
You can also look at hyperbolic discounting, which is closer to what you tried. The formula could be something like this, I guess:
Vn = V0 * (1/(1+k*n))
Another approach, simpler but cruder: linear discounting. You can simply give the scores an initial value, like 1000, and each day decrement all scores by 1 (or another constant).
Vn = V0 - k*n
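A quick Python sketch of the three decay rules above (the 10% rate, the k value and the sample scores are arbitrary):

def exponential_decay(v0, n_days, t=0.10):
    # Vn = V0 * (1 - t)^n
    return v0 * (1.0 - t) ** n_days

def hyperbolic_decay(v0, n_days, k=0.10):
    # Vn = V0 * (1 / (1 + k*n))
    return v0 / (1.0 + k * n_days)

def linear_decay(v0, n_days, k=1.0):
    # Vn = V0 - k*n, floored at zero so old products don't go negative
    return max(v0 - k * n_days, 0.0)

# A month-old product that scored 1000 vs. a day-old product that scored 150
for decay in (exponential_decay, hyperbolic_decay, linear_decay):
    print(decay.__name__, round(decay(1000, 30), 1), round(decay(150, 1), 1))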