Is there a simple algorithm or process for maximizing quantity - optimization


Is there a simple algorithm or process for maximizing quantity? Assume there are two products, A and B. The price of each changes each day and is independent of the other. You start with 100 units of A. Each day you can exchange (sell and buy) one product for the other. Your objective is to increase your quantity of A over, say, 100 days/iterations. What process do you use?
Price of A    Quantity    Price of B    Quantity
$10           100         $43           0
$11           -           $39           -
$12           -           $41           -
Note: I’m using prices and products in this example, but the problem could involve any countable thing with a variable feature.
I've modeled this process in Excel/Numbers using combinations of buy high/low, random choice, etc., with decent results, but I'm sure this is a problem that has already been studied. I just haven't found much on this topic in my research so far.

To make it a little easier, you can reduce the problem to a single unknown variable - "price of A / price of B". To maximize your quantity of A, you want to exchange all of your A for B when "price of A / price of B" will decrease on the next day/iteration, and exchange all of your B for A when "price of A / price of B" will increase on the next day/iteration.
However, to do this (to guarantee you maximize the quantity of A) you have to accurately predict the future with no mistakes.
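As an illustration, here is a minimal sketch of that rule in Python, assuming (unrealistically) that the full price series prices_a and prices_b are known in advance:

def maximize_a(prices_a, prices_b, qty_a=100.0):
    # ratio[d] = price of A / price of B on day d
    ratio = [a / b for a, b in zip(prices_a, prices_b)]
    qty_b = 0.0
    for d in range(len(ratio) - 1):
        if ratio[d + 1] < ratio[d] and qty_a > 0:
            # The ratio will fall tomorrow, so hold B today and buy A back cheaper later.
            qty_b, qty_a = qty_a * prices_a[d] / prices_b[d], 0.0
        elif ratio[d + 1] > ratio[d] and qty_b > 0:
            # The ratio will rise tomorrow, so switch back into A now.
            qty_a, qty_b = qty_b * prices_b[d] / prices_a[d], 0.0
    # Convert any remaining B back into A at the final day's prices.
    if qty_b > 0:
        qty_a = qty_b * prices_b[-1] / prices_a[-1]
    return qty_a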
If you can't predict the future with no mistakes, the best you can do is rely on "statistical probability" to try to increase your quantity of A (with a risk that you will fail and reduce your quantity of A, and an extremely low chance that you'll "maximize your quantity of A by accident").
If you can't predict the future at all, then it just becomes pure luck (better to do nothing and keep the quantity of A you already have). For example, over 100 days/iterations you could spend the first 50 days gathering information about how prices change (e.g. calculate "past min/max/average prices" and maybe find that "price of A / price of B" was always between 0.2 and 0.3 for the first 50 days/iterations); but it would be foolish to merely assume that the past predicts the future in any way at all (e.g. "price of A / price of B" might suddenly jump in any direction and never return to the range of previously seen values).
In other words, to improve your probability of increasing your quantity of A you need to improve your ability to predict the future; and to improve your ability to predict the future you need more information than what was provided.

Related

Optimization or Related Rates to solve this problem?

The problem: A company has $120 000 to spend on the development and promotion of a new product. The company estimates that if x is spent on the development and y is spent on promotion, then approximately (x^(1/2)y^(3/2))/(400000) items of new product will be sold. Based on this estimate, what is the maximum number of products that the company can sell?
Not sure if this is an optimization or related rates problem, but even then I am not sure as to how to start it. I know the answer is supposed to be 11691.
The critical observation here is to notice that y = 120000 - x, so your expression simplifies to a function of one variable:
f(x) = x^(1/2) * (120000 - x)^(3/2) / 400000
Plotting this on the plane, the max occurs at x = 30000 (you can verify analytically by finding zeroes of the first derivative); substituting, you'll find the max number of products to be 11,691.
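A quick numeric sanity check of that result (a minimal sketch in Python; the 400000 divisor and the 120000 budget are taken from the question):

def items(x):
    # Number of items sold when x is spent on development and 120000 - x on promotion.
    return (x ** 0.5) * ((120000 - x) ** 1.5) / 400000

best_x = max(range(0, 120001), key=items)
print(best_x, round(items(best_x)))  # prints: 30000 11691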

Can AMPL handle this recursively or is a remodeling necessary?

I'm using AMPL to model a production problem where I have two particular constraints that I am not very sure how to handle.
subject to Constraint1 {t in T}:
    prod[t] = sum{i in I} x[i,t]*u[i] + Recycle[f]*RecycledU[f];
subject to Constraint2 {t in T}:
    Solditems[t] + Recycle[t] = prod[t];
EDIT: x[i,t] is the amount of products from supply point i. u[i] denotes the "exchange rate" of the raw material from supply point i to create the product, i.e. a percentage of the raw material will become the finished product, whereas some raw material will go to waste. The same is true for RecycledU[f], where f is in F, which denotes the refinement station where it has been refined. The difference is that RecycledU[f] has a much lower percentage that will go to waste, since Recycle is already a finished product from f (albeit a much less profitable one). I.e. Recycle has already "gone through" the process of being a raw material earlier, x, but has become a finished product in some earlier stage, or hopefully (if it can be modelled) in the same time period as this. In the actual model, things such as "products" and "refinement stations" exist as well, but I figured for this question those could be left out to keep it simpler.
What I want to accomplish is that the amount of products produced is the sum of all items sold in time period t and the amount of products recycled in time period t (by recycled I mean that the finished product is kept at the production site for further refinement in some timestep g, g>t).
Is it possible to write two equations for prod[t] like I have done? Also, how do I handle Recycle[t]? Can AMPL "understand" that, since these are represented in the same time step, it must handle the constraints recursively, i.e. compute a solution for Recycle[t] and subsequently try to improve that solution in every time step?
EDIT: The time periods are expressed in years which is why I want to avoid having an expression with Recycle[t-1].
EDIT2: prod and x are parameters and Recycle and Solditems are variables.
Hope anyone can shed some light on this!
Cenderze
The two constraints will be considered simultaneously (unless you explicitly exclude one from the problem). AMPL and optimization solvers don't have a notion of time steps; the complete problem is considered at once, so you might need to add some linking constraints between time periods yourself. In particular, you might need to make sure that the inventory (such as the amount of finished product kept at the production site for further refinement) is carried over from one period to another, something like:
Recycle[t + 1] = Recycle[t] - RecycleDecrease + RecycleIncrease;
You have to figure out the expressions for the amounts by which Recycle is increased (RecycleIncrease) and decreased (RecycleDecrease).
Also if you want some kind of an iterative procedure with one constraint considered at a time instead, then you should use AMPL script.

A better way to generate pricing based on f(N)

I have in-game currency in my game. For a user to buy the next upgrade, I currently use a very simple method whereby the Nth upgrade costs N*1000 coins.
I'm not a massive fan of this at the moment, as I'd like it to be a bit easier to start off with and possibly scale better so it's not quite as hard to get upgrades.
One solution would be to use Fibonacci, which gives great early results but would make later upgrades nigh on impossible.
Can anyone offer a solution? My maths knowledge is pretty limited.
What about a sigmoid function? It starts to rise slowly, then it rises nearly linearly, and at the end it starts to slow down.
If you look at the graph on Wolfram Alpha, you can calculate your price like this:
price = a_bit_more_than_maximum_upgrade_price * sigmoid( x )
You have to choose what multiple of the maximum price the starting upgrade will cost: if you choose a starting x = -4 you'll get a price less than 5% of the maximum. The ending x could be equal to 4, where you'll reach around 98% of the maximum price. Then you have the number of upgrades. You could calculate the input for the sigmoid like this:
x = (upgrade_index / (number_of_upgrades-1)) * 8.0 - 4.0
The upgrade index starts from zero, and you have to have at least 2 upgrades :)
You can trim off the last few digits or round them up to get nicer-looking numbers.
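Here is a minimal sketch of that pricing curve in Python (max_price and number_of_upgrades are illustrative names, not anything from your game):

import math

def upgrade_price(upgrade_index, number_of_upgrades, max_price):
    # Map upgrade index 0 .. N-1 onto the sigmoid's input range [-4, 4].
    x = (upgrade_index / (number_of_upgrades - 1)) * 8.0 - 4.0
    sigmoid = 1.0 / (1.0 + math.exp(-x))
    # Roughly 2% of max_price for the first upgrade, roughly 98% for the last.
    return round(max_price * sigmoid)

# e.g. 20 upgrades topping out around 50000 coins
prices = [upgrade_price(i, 20, 50000) for i in range(20)]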
This seems like a question better suited to http://programmers.stackoverflow.com
But anyway, I would say try using an exponential function, something like
f(n) = 1000 * 1.1^n
Obviously once you have 100 or more upgrades the price gets a bit ridiculous. You can then perhaps use a condition to check whether n is larger than a certain number, and resume with your linear function to calculate the price of the next upgrade.
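A sketch of that idea in Python, with one small tweak (not part of the answer above) so the price doesn't drop back down at the switch-over point:

BASE = 1000      # cost scale from the original N*1000 rule
GROWTH = 1.1     # exponential growth rate per upgrade
CUTOFF = 100     # upgrade index after which the exponential gets ridiculous

def upgrade_cost(n):
    if n < CUTOFF:
        # Exponential pricing for the early and middle game.
        return round(BASE * GROWTH ** n)
    # Past the cutoff, keep growing linearly from the last exponential price.
    return round(BASE * GROWTH ** (CUTOFF - 1)) + (n - CUTOFF + 1) * BASE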

Bloomberg - get real world prices from the API

For a number of financial instruments, Bloomberg scales the prices that are shown in the Terminal - for example:
FX Futures at CME: e.g. ADZ3 Curncy (Dec-2013 AUD Futures at CME) show as 93.88 (close on 04-Oct-2013), whereas the actual (CME) market/settlement price was 0.9388
FX Rates: sometimes FX rates are scaled - this may vary by which way round the FX rate is asked for, so EURJPY Curncy (i.e. JPY per EUR) has a BGN close of 132.14 on 04-Oct-2013. The inverse (EUR per JPY) would be 0.007567. However, for JPYEUR Curncy (i.e. EUR per JPY), BGN has a close of 0.75672 for 04-Oct-2013.
FX Forwards: Depending on whether you are asking for rates or forward points (which can be set by overrides)... if you ask for rates, you might get these in terms of the original rate, so for EURJPY1M Curncy, BGN has a close of 132.1174 on 04-Oct-2013. But if you ask for forward points, you would get these scaled by some factor - i.e. -1.28 for EURJPY1M Curncy.
Now, I am not trying to criticise Bloomberg for the way that they represent this data in the Terminal. Goodness only knows when they first wrote these systems, and they have to maintain the functionality that market practitioners have come to know and perhaps love... In that context, scaling to the significant figures might make sense.
However, when I am using the API, I want to get real-world, actual prices. Like... the actual price at the exchange or the actual price that you can trade EUR for JPY.
So... how can I do that?
Well... the approach that I have come to use is to find the FLDS that communicate this scaling information, and then I fetch that value to reverse the scale that they have applied to the values. For futures, that's PX_SCALING_FACTOR. For FX, I've found PX_POS_MULT_FACTOR most reliable. For FX forward points, it's FWD_SCALE.
(It's also worth mentioning that how these are applied varies - PX_SCALING_FACTOR is what futures prices should be divided by, PX_POS_MULT_FACTOR is what FX rates should be multiplied by, and FWD_SCALE is how many decimal places to divide the forward points by to get a value that can be added to the actual FX rate.)
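To make the arithmetic concrete, a hedged sketch in Python (unscale is just an illustrative helper, not a Bloomberg API call; factor is the value of the relevant field, fetched separately):

def unscale(kind, raw_value, factor):
    # kind is one of "future", "fx_rate", "fwd_points"; factor is the value of
    # PX_SCALING_FACTOR, PX_POS_MULT_FACTOR or FWD_SCALE respectively.
    if kind == "future":
        return raw_value / factor           # e.g. a factor of 100 turns 93.88 into the CME price 0.9388
    if kind == "fx_rate":
        return raw_value * factor
    if kind == "fwd_points":
        # FWD_SCALE is a decimal-place count; the result can be added to the actual spot rate.
        return raw_value / (10 ** factor)
    return raw_value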
The problem with that is that it doubles the number of fetches I have to make, which adds a significant overhead to my use of the API (reference data fetches also seem to take longer than historical data fetches). (FWIW, I'm using the API in Java, but the question should be equally applicable to using the API in Excel or any of the other supported languages.)
I've thought about finding out this information and storing it somewhere... but I'd really like not to have to hard-code that. Also, that would require me to spend a very long time finding out the right scaling factors for all the different instruments I'm interested in. Even then, I would have no guarantee that they wouldn't change their scale on me at some point!
What I would really like to be able to do is apply an override in my fetch that would allow me to specify what scale should be used. (And no, the fields above do not seem to be override-able.) I've asked the "helpdesk" about this on lots and lots of occasions - I've been badgering them about it for about 12 months, but as ever with Bloomberg, nothing seems to have happened.
So...
has anyone else in the SO community faced this problem?
has anyone else found a way of setting this as an override?
has anyone else worked out a better solution?
Short answer: you seem to have all the available information at hand and there is not much more you can do. But these conventions are stable over time, so it is fine to store the scales/factors instead of fetching the data every time (the scale of EURGBP points will always be 4).
For FX, I have a file with:
number of decimals (for spot, points and the all-in forward rate)
points scale
spot date
To answer your specific questions:
FX Futures at CME: on ADZ3 Curncy > DES > 3:
For this specific contract, the price is quoted in cents/AUD instead of exchange convention USD/AUD in order to show greater precision for both the futures and options. Calendar spreads are also adjusted accordingly. Please note that the tick size has been adjusted by 0.01 to ensure the tick value and contract value are consistent with the exchange.
Not sure there is much you can do about this, apart from manually checking the factor...
FX Rates: PX_POS_MULT_FACTOR is your best bet indeed - note that the value of that field for a given pair is extremely unlikely to change. Alternatively, you could follow market conventions for pairs and AFAIK the rates will always be the actual rate. So use EURJPY instead of JPYEUR. The major currencies, in order, are: EUR, GBP, AUD, NZD, USD, CAD, CHF, JPY. For pairs that don't involve any of those you will have to fetch the info.
FX Forwards: the points follow the market conventions, but the scale can vary (it is 4 most of the time, but it is 3 for GBPCZK for example). However it should not change over time for a given pair.

Ranking algorithm in a rails app

We have a model in our Rails app whose objects are assigned a score based on positive user actions. We'll call them products for simplicity's sake. If a user likes a product, buys a product, or views a product, the score is incremented at various weights (a like might be worth more than a view, two views in the span of 30 seconds might be worth more than three views spread over an hour, etc.).
We'd like to use these scores to help sort and rank products, say for a popular-products list, but for various reasons using the straight ranking is going to unevenly favor older products, since they'll have more time to amass a higher score.
My question is: how do I normalize the scores between new and old products? I thought about dividing the product's score by a unit of time, say the number of days it's been in existence, but am worried that will cut down the older products too much. Any thoughts on the best way to fairly normalize the scores between the old and new products?
I'm also considering an example of a bayesian rating system I found in another question:
rating = ((avg_num_votes * avg_rating) + (product_num_votes * product_rating)) / (avg_num_votes + product_num_votes)
Where the avg numbers are calculated by looking at the scores across all products that have more than one vote (or in our case, a positive action). This might not be the best way, because we don't have a negative rating in our system and it doesn't take time into consideration at all.
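For reference, a minimal sketch of that formula in Python (the avg_* inputs are the site-wide aggregates described above):

def bayesian_rating(product_num_votes, product_rating, avg_num_votes, avg_rating):
    # Pulls products with few votes toward the site-wide average rating,
    # while products with many votes keep roughly their own rating.
    total_votes = avg_num_votes + product_num_votes
    return (avg_num_votes * avg_rating + product_num_votes * product_rating) / total_votes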
Your question reminds me of the concept of exponential discounting of cash flows in finance.
The concept is the following: $100 in two years is worth less than $100 in one year, which is worth less than $100 now.
I think we can make a good comparison here: a product of yesterday is worth more than a product of the day before, but less than a product of today.
The formula is simple:
Vn = V0 * (1-t)^n
with V0 the initial value (the real number of positive votes), t a discount rate (you have to fix it, like 10%) and n the time passed (for example n days). Thus a product will lose 10% of its value each day (but 10% of the previous day's value, not of the initial value).
You can also look at hyperbolic discounting, which is closer to what you tried. The formula can be something like this, I guess:
Vn = V0 * (1/(1+k*n))
Another approach, simpler but cruder: linear discounting. You can simply give an initial value for the scores, like 1000, and each day decrement all scores by 1 (or another constant).
Vn = V0 - k*n
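For comparison, a minimal sketch of the three discounting schemes in Python (the 10% rate and the decrement of 1 are the example values above; the hyperbolic k is just a placeholder to tune):

def exponential_discount(v0, n, t=0.10):
    # Vn = V0 * (1 - t)^n : lose t of the previous day's value each day.
    return v0 * (1 - t) ** n

def hyperbolic_discount(v0, n, k=0.10):
    # Vn = V0 / (1 + k*n) : decays quickly at first, then more and more slowly.
    return v0 / (1 + k * n)

def linear_discount(v0, n, k=1):
    # Vn = V0 - k*n : subtract a constant each day, floored at zero.
    return max(v0 - k * n, 0)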