I'm trying to build a simple program to price call options using the Black-Scholes formula (http://en.wikipedia.org/wiki/Black%E2%80%93Scholes). I'm trying to figure out the best way to get probabilities from a normal distribution. For example, if I were doing this by hand and got the value d1 = 0.43, I'd look up 0.43 in this table (http://www.math.unb.ca/~knight/utility/NormTble.htm) and get the value 0.6664.
I believe there are no functions in C or Objective-C to find the normal distribution. I'm thinking about creating a two-dimensional array and looping through it until I find the desired value, or maybe defining 300 doubles with the corresponding values and looping through those until I get the appropriate result. Any thoughts on the best approach?
You need to define what you are looking for more clearly. Based on what you posted, it appears you are looking for the cumulative distribution function P(d < d1), where d1 is measured in standard deviations and d is normally distributed: by your example, if d1 = 0.43 then P(d < d1) = 0.6664.
The function you want is called the error function erf(x) and there are some good approximations for it.
Apparently erf(x) is part of the standard math.h in C (I'm not sure about Objective-C, but since it sits on top of C it almost certainly has it as well).
But erf(x) is not exactly the function you need. The general form P(d < d1) can be calculated from erf(x) in the following formula:
P(d<d1) = f(d1,sigma) = (erf(x/sigma/sqrt(2))+1)/2
where sigma is the standard deviation. (in your case you can use sigma = 1.)
You can test this on Wolfram Alpha for example: f(0.43,1) = (erf(0.43/sqrt(2))+1)/2 = 0.666402 which matches your table.
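In C itself this is a one-liner once you have erf() from math.h (C99 and later). A minimal sketch, not production code:

#include <math.h>
#include <stdio.h>

/* P(d < d1) for a normal distribution with standard deviation sigma */
double normal_cdf(double d1, double sigma) {
    return (erf(d1 / (sigma * sqrt(2.0))) + 1.0) / 2.0;
}

int main(void) {
    printf("%f\n", normal_cdf(0.43, 1.0));   /* prints 0.666402, matching the table */
    return 0;
}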
There are two other things that are important:
If you are looking for P(d < d1) where d1 is large in absolute value (more than about 3.0 * sigma), then you should really be using the complementary error function erfc(x) = 1 - erf(x), which tells you how close P(d < d1) is to 0 or 1 without running into numerical errors. For d1 < -3*sigma, P(d < d1) = (erf(d1/sigma/sqrt(2)) + 1)/2 = erfc(-d1/sigma/sqrt(2))/2, and for d1 > 3*sigma, P(d < d1) = (erf(d1/sigma/sqrt(2)) + 1)/2 = 1 - erfc(d1/sigma/sqrt(2))/2 -- but don't actually compute that; instead leave it as 1 - K where K = erfc(d1/sigma/sqrt(2))/2. For example, if d1 = 5*sigma, then P(d < d1) = 1 - 2.866516*10^-7.
If your programming environment doesn't have erf(x) built into the available libraries, you need a good approximation. (I thought I had an easy one to use but I can't find it, and I think it was actually for the inverse error function.) I found this 1969 article by W. J. Cody which gives a good approximation for erf(x) if |x| < 0.5; it's better to use erf(x) = 1 - erfc(x) for |x| > 0.5. For example, say you want erf(0.2) ≈ 0.2227025892105 (value from Wolfram Alpha); Cody says to evaluate x * R(x^2), where R is a rational function whose coefficients you can get from his tables.
If I try this in Javascript (coefficients from Table II of the Cody paper):
// use only for |x| <= 0.5
function erf1(x)
{
  var y = x*x;
  return x*(3.6767877 - 9.7970565e-2*y)/(3.2584593 + y);
}
then I get erf1(0.2) = 0.22270208866303123 which is pretty close, for a 1st-order rational function. Cody gives tables of coefficients for rational functions up to degree 4; here's degree 2:
// use only for |x| <= 0.5
function erf2(x)
{
  var y = x*x;
  return x*(21.3853322378 + 1.72227577039*y + 0.316652890658*y*y)
       / (18.9522572415 + 7.8437457083*y + y*y);
}
which gives you erf2(0.2) = 0.22270258922638206 which is correct out to 10 decimal places. The Cody paper also gives you similar formulas for erfc(x) where |x| is between 0.5 and 4.0, and a third formula for erfc(x) where |x| > 4.0, and you can check your results with Wolfram Alpha or known erfc(x) tables for accuracy if you like.
Hope this helps!
I have 4 non-negative real variables: A, B, C and X. Based on the problem I am working on, the variable X must belong to the interval [B, C], and the relation is a bunch of if-else conditions like this:
if A < B:
    x = B
elseif A > C:
    x = C
elseif B <= A <= C:
    x = A
As you can see, it is quite difficult to reformulate this as a Mixed Integer Programming problem with corresponding decision variables (d1, d2 and d3). I have tried reading some instructions on if-then formulations using the big-M method at this site:
https://www.math.cuhk.edu.hk/course_builder/1415/math3220/L2%20(without%20solution).pdf but it seems that this problem is more challenging than their tutorial.
Could you kindly provide me with a formulation for this situation?
Thank you very much!
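For what it's worth, here is one standard big-M sketch of this clamp relation (my own outline, not a verified model; it assumes B <= C, uses two binaries rather than the three you mention, and needs M to be a constant at least as large as the ranges of A, B and C). Introduce an auxiliary continuous variable Y standing for max(A, B); then X = min(Y, C):

Y >= A
Y >= B
Y <= A + M*(1 - d1)
Y <= B + M*d1
X <= Y
X <= C
X >= Y - M*(1 - d2)
X >= C - M*d2
d1, d2 binary

With d1 = 1 the first block forces Y = A (feasible only when A >= B); with d1 = 0 it forces Y = B. The second block does the same trick for the min with C.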
I'm looking to fit a model to estimate multiple probabilities for binomial data with Stan. I was using beta priors for each probability, but I've been reading about using hyperpriors to pool information and encourage shrinkage on the estimates.
I've seen this example to define the hyperprior in pymc, but I'm not sure how to do something similar with Stan
import numpy as np
import pymc

@pymc.stochastic(dtype=np.float64)
def beta_priors(value=[1.0, 1.0]):
    a, b = value
    if a <= 0 or b <= 0:
        return -np.inf
    else:
        return np.log(np.power((a + b), -2.5))

a = beta_priors[0]
b = beta_priors[1]
With a and b then being used as parameters for the beta prior.
Can anybody give me any pointers on how something similar would be done with Stan?
To properly normalize that, you need a Pareto distribution. For example, if you want a distribution p(a, b) ∝ (a + b)^(-2.5), you can use
a + b ~ pareto(L, 1.5);
where a + b > L. There's no way to normalize the density with support for all values greater than or equal to zero---it needs a finite L as a lower bound. There's a discussion of using just this prior as the count component of a hierarchical prior for a simplex.
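As a quick sanity check (mine, not part of the original answer): the Pareto density with lower bound L and shape alpha is

p(s) = alpha * L^alpha / s^(alpha + 1),  for s >= L

so alpha = 1.5 gives p(s) proportional to s^(-2.5), which is exactly the (a + b)^(-2.5) shape being targeted, restricted to a + b >= L.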
If a and b are parameters, they can either both be constrained to be positive, or you can leave a unconstrained and declare
real<lower = L - a> b;
to ensure a + b > L. L can be a small constant or something more reasonable given your knowledge of a and b.
You should be careful because this will not identify a + b. We use this construction as a hierarchical prior for simplexes as:
parameters {
  real<lower = 1> kappa;
  real<lower = 0, upper = 1> phi;
  vector<lower = 0, upper = 1>[K] theta;
}
model {
  kappa ~ pareto(1, 1.5);                       // power law prior
  phi ~ beta(a, b);                             // choose your prior for phi
  theta ~ beta(kappa * phi, kappa * (1 - phi)); // vectorized
}
There's an extended example in my Stan case study of repeated binary trials, which is reachable from the case studies page on the Stan web site (the case study directory is currently linked under the documentation link from the users tab).
Following suggestions in the comments I'm not sure that I will follow this approach, but for reference I thought I'd at least post the answer to my question of how this could be accomplished in Stan.
After some asking around on the Stan Discourse forums and further investigation, I found that the solution was to define a custom density and use the target += syntax. So the Stan equivalent of the pymc example would be:
parameters {
  real<lower=0> a;
  real<lower=0> b;
  real<lower=0,upper=1> p;
  ...
}
model {
  target += log((a + b)^-2.5);  // equivalent to -2.5 * log(a + b)
  p ~ beta(a, b);
  ...
}
I have an arbitrary set of constraints. For example:
A, B, C and D are 8-bit integers.
A + B + C + D = 50
(A + B) = 25
(C + D) = 30
A < 10
I can convert this to a SAT problem that can be solved by picosat (I can't get minisat to compile on my Mac), or to an SMT problem that can be solved by CVC4. To do that I need to:
Map these equations into a Boolean circuit.
Apply Tseitin's transformation to the circuit and convert it into DIMACS format.
My questions:
What tools do I use to do the conversion to the circuit?
What are the file formats for circuits and which ones are commonly used?
What tools do I use to transform the circuit to DIMACS format?
Conceptually
Build a circuit, then apply Tseitin's transform.
You'll need to express the addition and comparison operators as Boolean logic. There are standard ways to build a circuit for two's-complement addition and for two's-complement comparison.
Then, use Tseitin's transform to convert this to a SAT instance.
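For example (an illustrative fragment of my own, not from the question): a single AND gate c = a AND b becomes the three clauses (¬a ∨ ¬b ∨ c), (a ∨ ¬c), (b ∨ ¬c) under Tseitin's transform. Numbering the variables a=1, b=2, c=3, the corresponding DIMACS CNF file would be:

p cnf 3 3
-1 -2 3 0
1 -3 0
2 -3 0

An adder or comparator circuit expands the same way, one small clause set per gate, with one fresh variable per gate output.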
In practice
Use a SAT front-end that will do this conversion for you. Z3 will take care of this for you. So will STP. (The conversion is sometimes known as "bit-blasting".)
In MiniZinc, you could just write a constraint programming model:
set of int: I8 = 0..255;
var I8: A;
var I8: B;
var I8: C;
var I8: D;
constraint A + B + C + D == 50;
constraint (A + B) = 25;
constraint (C + D) = 30;
constraint A < 10;
solve satisfy;
The example constraints cannot be satisfied, because 25 + 30 > 50.
The Python interface of Z3 would allow the following:
from z3 import *
A, B, C, D = Ints('A B C D')
s = Solver()
s.add(A >= 0, A < 256)
s.add(B >= 0, B < 256)
s.add(C >= 0, C < 256)
s.add(D >= 0, D < 256)
s.add(A+B+C+D == 50)
s.add((A+B) == 25)
s.add((C+D) == 30)
s.add(A < 10)
result = s.check()
print(result)           # unsat here, since 25 + 30 > 50
if result == sat:
    print(s.model())    # a model is only available when satisfiable
So I have an answer. There's a program called Sugar: a SAT-based Constraint Solver, which takes a series of constraints as S-expressions, converts the whole thing into a DIMACS file, runs the SAT solver, and then converts the results of the SAT solver back into results for your constraints.
Sugar was developed by Naoyuki Tamura to solve math puzzles like Sudoku. I found that it makes it extremely simple to code complex constraints and run them.
For example, to find the square-root of 625, one could do this:
(int X 0 625)
(= (* X X) 625)
The first line says that X ranges from 0 to 625. The second line says that X*X is 625.
This may not be as simple and elegant as Z3, but it worked really well.
I am processing a series of points which all have the same Y value, but different X values. I go through the points by incrementing X by one. For example, I might have Y = 50 and X is the integers from -30 to 30. Part of my algorithm involves finding the distance to the origin from each point and then doing further processing.
After profiling, I've found that the sqrt call in the distance calculation is taking a significant amount of my time. Is there an iterative way to calculate the distance?
In other words:
I want to efficiently calculate r[n] = sqrt(x[n]*x[n] + y*y). I can save information from the previous iteration. Each iteration changes by incrementing x, so x[n] = x[n-1] + 1. I cannot use sqrt or trig functions because they are too slow, except at the beginning of each scanline.
I can use approximations as long as they are good enough (less than 0.1% error) and the errors introduced are smooth (I can't bin to a pre-calculated table of approximations).
Additional information:
x and y are always integers between -150 and 150
I'm going to try a couple ideas out tomorrow and mark the best answer based on which is fastest.
Results
I did some timings:
Distance formula: 16 ms / iteration
Pete's interpolating solution: 8 ms / iteration
wrang-wrang's pre-calculation solution: 8 ms / iteration
I was hoping the test would decide between the two, because I like both answers. I'm going to go with Pete's because it uses less memory.
Just to get a feel for it: in your range, y = 50, x = 0 gives r = 50, and y = 50, x = +/- 30 gives r ~= 58.3. You want an approximation good to +/- 0.1%, or about +/- 0.05 absolute. That's a lot lower accuracy than most library sqrts provide.
Two approximate approaches - you calculate r based on interpolating from the previous value, or use a few terms of a suitable series.
Interpolating from previous r
r = (x^2 + y^2)^(1/2)
dr/dx = (1/2) * 2x * (x^2 + y^2)^(-1/2) = x/r
double r = 50;
for ( int x = 0; x <= 30; ++x ) {
double r_true = Math.sqrt ( 50*50 + x*x );
System.out.printf ( "x: %d r_approx: %f r_true: %f error: %f%%\n", x, r, r_true, 100 * Math.abs ( r_true - r ) / r );
r = r + ( x + 0.5 ) / r;
}
Gives:
x: 0 r_approx: 50.000000 r_true: 50.000000 error: 0.000000%
x: 1 r_approx: 50.010000 r_true: 50.009999 error: 0.000002%
....
x: 29 r_approx: 57.825065 r_true: 57.801384 error: 0.040953%
x: 30 r_approx: 58.335225 r_true: 58.309519 error: 0.044065%
which seems to meet the requirement of 0.1% error, so I didn't bother coding the next one, as it would require quite a few more calculation steps.
Truncated Series
The Taylor series for sqrt(1 + x) for x near zero is
sqrt(1 + x) = 1 + (1/2)x - (1/8)x^2 + ... + (-1/2)^(n+1) x^n + ...
Using r = y sqrt(1 + (x/y)^2), you're looking for a term t = (-1/2)^(n+1) 0.36^n with magnitude less than 0.001: log(0.002) > n log(0.18), or n > 3.6, so taking terms up to x^4 should be OK.
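To make that concrete, here's a rough C sketch of the truncated series (my own illustration, not Pete's code), keeping terms up to t^4 where t = (x/y)^2. It's only reasonable while |x/y| stays well below 1, as in the example range (y = 50, |x| <= 30):

#include <math.h>
#include <stdio.h>

/* sqrt(1 + t) ≈ 1 + t/2 - t^2/8 + t^3/16 - 5t^4/128, evaluated with Horner's rule */
double dist_series(double x, double y) {
    double t = (x / y) * (x / y);
    double s = 1.0 + t * (0.5 + t * (-0.125 + t * (0.0625 + t * (-0.0390625))));
    return y * s;
}

int main(void) {
    for (int x = 0; x <= 30; x += 10) {
        printf("x=%d approx=%f exact=%f\n",
               x, dist_series(x, 50.0), sqrt(x * (double)x + 50.0 * 50.0));
    }
    return 0;
}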
Y = 10000
Y2 = Y*Y
for x = 0..Y2 do
    D[x] = sqrt(Y2 + x*x)

norm(x,y) =
    if (y == 0) x
    else if (x > y) norm(y,x)
    else {
        s = Y/y
        D[round(x*s)]/s
    }
If your coordinates are smooth, then the idea can be extended with linear interpolation. For more precision, increase Y.
The idea is that s*(x,y) is on the line y=Y, which you've precomputed distances for. Get the distance, then divide it by s.
I assume you really do need the distance and not its square.
You may also be able to find a general sqrt implementation that sacrifices some accuracy for speed, but I have a hard time imagining that beating what the FPU can do.
By linear interpolation, I mean to change D[round(x)] to:
f=floor(x)
a=x-f
D[f]*(1-a)+D[f+1]*a
This doesn't really answer your question, but may help...
The first questions I would ask would be:
"do I need the sqrt at all?".
"If not, how can I reduce the number of sqrts?"
then yours: "Can I replace the remaining sqrts with a clever calculation?"
So I'd start with:
Do you need the exact radius, or would radius-squared be acceptable? There are fast approximations to sqrt, but probably not accurate enough for your spec.
Can you process the image using mirrored quadrants or eighths? By processing all pixels at the same radius value in a batch, you can reduce the number of calculations by 8x.
Can you precalculate the radius values? You only need a table that is a quarter (or possibly an eighth) of the size of the image you are processing, and the table would only need to be precalculated once and then re-used for many runs of the algorithm.
So clever maths may not be the fastest solution.
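To make the precalculation idea concrete, here's a small C sketch (my own illustration, relying on the question's statement that x and y are integers in [-150, 150]): build the table of radii once for the non-negative quadrant and reuse it, exploiting mirror symmetry.

#include <math.h>
#include <stdlib.h>

#define R_MAX 150

/* ~151*151 floats, about 91 KB, filled once and reused across runs */
static float radius[R_MAX + 1][R_MAX + 1];

static void init_radius_table(void) {
    for (int y = 0; y <= R_MAX; ++y)
        for (int x = 0; x <= R_MAX; ++x)
            radius[y][x] = sqrtf((float)(x * x + y * y));
}

static inline float lookup_radius(int x, int y) {
    return radius[abs(y)][abs(x)];   /* symmetry: r(x, y) = r(|x|, |y|) */
}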
Well, there's always trying to optimize your sqrt; the fastest one I've seen is the old Carmack Quake 3 fast inverse square root:
http://betterexplained.com/articles/understanding-quakes-fast-inverse-square-root/
That said, since sqrt is non-linear, you're not going to be able to do simple linear interpolation along your line to get your result. The best idea is to use a table lookup since that will give you blazing fast access to the data. And, since you appear to be iterating by whole integers, a table lookup should be exceedingly accurate.
Well, you can mirror around x=0 to start with (you need only compute n>=0, then dupe those results to the corresponding n<0). After that, I'd take a look at using the derivative of sqrt(a^2+b^2) (or the corresponding sin) to take advantage of the constant dx.
If that's not accurate enough, may I point out that this is a pretty good job for SIMD, which will provide you with a reciprocal square root op on both SSE and VMX (and shader model 2).
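For reference, a rough SSE sketch of that reciprocal-square-root idea (my own illustration; _mm_rsqrt_ps is accurate to roughly 12 bits, i.e. about 0.04% worst-case relative error, so check it against the 0.1% budget on your target):

#include <xmmintrin.h>

/* Compute four distances sqrt(x*x + y*y) at once: s * rsqrt(s) = sqrt(s).
   Assumes s > 0, which holds whenever the scanline's y is non-zero. */
void distances4(const float *x, float y, float *r) {
    __m128 vx = _mm_loadu_ps(x);
    __m128 vy = _mm_set1_ps(y);
    __m128 s  = _mm_add_ps(_mm_mul_ps(vx, vx), _mm_mul_ps(vy, vy));
    __m128 d  = _mm_mul_ps(s, _mm_rsqrt_ps(s));
    _mm_storeu_ps(r, d);
}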
This is sort of related to a HAKMEM item:
ITEM 149 (Minsky): CIRCLE ALGORITHM

Here is an elegant way to draw almost circles on a point-plotting display:

NEW X = OLD X - epsilon * OLD Y
NEW Y = OLD Y + epsilon * NEW(!) X

This makes a very round ellipse centered at the origin with its size determined by the initial point. epsilon determines the angular velocity of the circulating point, and slightly affects the eccentricity. If epsilon is a power of 2, then we don't even need multiplication, let alone square roots, sines, and cosines! The "circle" will be perfectly stable because the points soon become periodic.

The circle algorithm was invented by mistake when I tried to save one register in a display hack! Ben Gurley had an amazing display hack using only about six or seven instructions, and it was a great wonder. But it was basically line-oriented. It occurred to me that it would be exciting to have curves, and I was trying to get a curve display hack with minimal instructions.
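For what it's worth, the quoted recurrence is tiny in code; here's a C sketch of it (my own illustration, with an arbitrary starting point and epsilon):

#include <stdio.h>

int main(void) {
    float x = 50.0f, y = 0.0f;    /* the initial point sets the radius */
    const float eps = 0.0625f;    /* 2^-4; a power of two, so the multiplies could be shifts */
    for (int i = 0; i < 8; ++i) {
        x = x - eps * y;
        y = y + eps * x;          /* deliberately uses the NEW x, as in the item */
        printf("%f %f\n", x, y);
    }
    return 0;
}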
My inner loop contains a calculation that profiling shows to be problematic.
The idea is to take a greyscale pixel x (0 <= x <= 1), and "increase its contrast". My requirements are fairly loose, just the following:
for x < .5, 0 <= f(x) < x
for x > .5, x < f(x) <= 1
f(0) = 0
f(x) = 1 - f(1 - x), i.e. it should be "symmetric"
Preferably, the function should be smooth.
So the graph must look something like this (figure omitted: an S-shaped curve from (0,0) to (1,1), below the identity line for x < 0.5 and above it for x > 0.5).
I have two implementations (their results differ but both are conformant):
float cosContrastize(float x) {
    return .5 - cos(x * M_PI) / 2;  // M_PI from <math.h>
}
float mulContrastize(float i) {
    if (i < .5) return i * i * 2;
    i = 1 - i;
    return 1 - i * i * 2;
}
So I request either a microoptimization for one of these implementations, or an original, faster formula of your own.
Maybe one of you can even twiddle the bits ;)
Consider the following sigmoid-shaped functions (properly translated to the desired range):
error function
normal CDF
tanh
logit
I generated a comparison figure of these using MATLAB (not reproduced here). If you're interested, here's the code:
x = -3:.01:3;
plot( x, 2*(x>=0)-1, ...
x, erf(x), ...
x, tanh(x), ...
x, 2*normcdf(x)-1, ...
x, 2*(1 ./ (1 + exp(-x)))-1, ...
x, 2*((x-min(x))./range(x))-1 )
legend({'hard' 'erf' 'tanh' 'normcdf' 'logit' 'linear'})
Trivially you could simply threshold, but I imagine this is too dumb:
return i < 0.5 ? 0.0 : 1.0;
Since you mention 'increasing contrast' I assume the input values are luminance values. If so, and they are discrete (perhaps it's an 8-bit value), you could use a lookup table to do this quite quickly.
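For example, a sketch of that lookup-table approach in C (my own illustration; it bakes the question's cosine curve into a 256-entry table once, so each pixel afterwards costs a single array read):

#include <math.h>
#include <stdint.h>

static uint8_t contrast_lut[256];

void init_contrast_lut(void) {
    const double pi = 3.14159265358979323846;
    for (int v = 0; v < 256; ++v) {
        double x = v / 255.0;
        double y = 0.5 - cos(x * pi) / 2.0;   /* the cosine curve from the question */
        contrast_lut[v] = (uint8_t)(y * 255.0 + 0.5);
    }
}

/* per pixel: out = contrast_lut[in]; */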
Your 'mulContrastize' looks reasonably quick. One optimization would be to use integer math. Let's say, again, your input values could actually be passed as an 8-bit unsigned value in [0..255]. (Again, possibly a fine assumption?) You could do something roughly like...
int mulContrastize(int i) {
    if (i < 128) return (i * i) >> 7;   // the shift is really: * 2 / 256
    i = 255 - i;
    return 255 - ((i * i) >> 7);
}
A piecewise interpolation can be fast and flexible. It requires only a few decisions followed by a multiplication and addition, and can approximate any curve. It also avoids the coarseness that can be introduced by lookup tables (or the additional cost of two lookups followed by an interpolation to smooth that out), though the LUT might work perfectly fine for your case.
With just a few segments, you can get a pretty good match. Here there will be coarseness in the color gradients, which will be much harder to detect than coarseness in the absolute colors.
As Eamon Nerbonne points out in the comments, segmentation can be optimized by "choos[ing] your segmentation points based on something like the second derivative to maximize detail", that is, where the slope is changing the most. Clearly, in my posted example, having three segments in the middle of the five segment case doesn't add much more detail.
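As a concrete illustration of the piecewise idea (my own sketch, not the answer's actual segments): a symmetric three-segment piecewise-linear curve that satisfies the question's requirements, continuous though not smooth, costing at most two compares, one multiply and one add per pixel:

/* Knee points at 0.25 and 0.75 are arbitrary; the curve is symmetric about (0.5, 0.5). */
float pwContrastize(float x) {
    if (x < 0.25f) return 0.5f * x;          /* gentle slope near black */
    if (x < 0.75f) return 1.5f * x - 0.25f;  /* steep slope through the midtones */
    return 0.5f * x + 0.5f;                  /* gentle slope near white */
}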