Compute FOVX when FOVY > 180° - camera

This is related to this question. The formula fieldOfViewX = 2 * atan(tan(fieldOfViewY * 0.5) * aspect) works, but when FOVY > 180° for fisheye cameras it doesn't work anymore. Is it possible to adapt this formula to make it work?
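For reference, here is the formula from the question written as a small function (my sketch, not part of the question). Angles are in radians; tan(fieldOfViewY / 2) diverges as FOVY approaches 180°, which is why this perspective-projection formula cannot describe a fisheye field of view:
#include <cmath>

// Valid only for fieldOfViewY < pi (180°), i.e. a perspective projection.
double fieldOfViewX(double fieldOfViewY, double aspect) {
    return 2.0 * std::atan(std::tan(fieldOfViewY * 0.5) * aspect);
}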

Proper way of adding CGAL points api-wise

I have a triangle defined by its three vertices. The vertices are of type Point = CGAL::Point_2<K> with a Simple_cartesian<double> kernel.
I want to randomly sample this triangle and for that I use a formula (https://math.stackexchange.com/questions/18686/uniform-random-point-in-triangle-in-3d) which adds the three vertices of the triangle multiplied by some random factors.
Point p = Point(0, 0) +
          (1 - std::sqrt(r1)) * (standardTriangle[0] - Point(0, 0)) +
          (std::sqrt(r1) * (1 - r2)) * (standardTriangle[1] - Point(0, 0)) +
          (r2 * std::sqrt(r1)) * (standardTriangle[2] - Point(0, 0));
This looks very cumbersome, as I need to convert the points to vectors by subtracting Point(0, 0) and then add everything back onto a point at the origin.
It looks more natural to just do something like the following:
Point p = (1 - std::sqrt(r1)) * standardTriangle[0] +
          (std::sqrt(r1) * (1 - r2)) * standardTriangle[1] +
          (r2 * std::sqrt(r1)) * standardTriangle[2];
Is adding and subtracting a point at the origin really the only way to sum points, even though mathematically this is not correct?
You might want to use the barycenter() function.
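A minimal sketch of what that could look like (my code, assuming the three-weighted-point overload of CGAL::barycenter(); the exact overload and function name samplePoint are my assumptions, not stated in the answer):
#include <CGAL/Simple_cartesian.h>
#include <cmath>

using K = CGAL::Simple_cartesian<double>;
using Point = CGAL::Point_2<K>;

Point samplePoint(const Point standardTriangle[3], double r1, double r2) {
    const double s = std::sqrt(r1);
    // Weights sum to 1, so this is exactly the convex combination from the question.
    return CGAL::barycenter(standardTriangle[0], 1 - s,
                            standardTriangle[1], s * (1 - r2),
                            standardTriangle[2], s * r2);
}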
In case you need another sampling method, there is one available in CGAL. See here

Using vDSP_biquad as a one pole filter

I'd like to be able to use the vDSP_biquad function as a one pole filter.
My one-pole filter looks like this:
output[i] = onePole->z1 = input[i] * onePole->a0 + onePole->z1 * onePole->b1;
where
b1 = exp(-2.0 * M_PI * (_frequency / sampleRate));
a0 = 1.0 - b1;
This one pole works great, but of course it's not optimized, which is why I'd like to use the Accelerate Framework to speed it up.
Because vDSP_biquad uses the Direct Form II of the biquad implementation, it seems to me I should be able to set the coefficients to use it as a one-pole filter. https://en.wikipedia.org/wiki/Digital_biquad_filter#Direct_form_2
filter->omega = 2 * M_PI * freq / sampleRate;
filter->b1 = exp(-filter->omega);
filter->b0 = 1 - filter->b1;
filter->b2 = 0;
filter->a1 = 0;
filter->a2 = 0;
However, this does not work as a one-pole filter. (The biquad implementation itself is fine; I use it for many other filter types. It's just that these coefficients don't have the desired effect.)
What am I doing wrong?
Also open to hearing other ways to optimize a one-pole filter with Accelerate or otherwise.
The formula in the Apple docs is:
y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
In your code above, you're using b1, which multiplies the previous input, x[n-1]. For a one-pole, you'll need to use the previous output, y[n-1], which is controlled by a1.
So I think the coefficients you want are:
a1 = -exp(-2.0 * M_PI * (_frequency / sampleRate))
b0 = 1.0 + a1
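A sketch (mine, not from the answer) of wiring those coefficients into vDSP_biquad; the per-section coefficient order is b0, b1, b2, a1, a2, and the delay array needs 2 * sections + 2 floats (keep it between calls when processing a stream):
#include <Accelerate/Accelerate.h>
#include <cmath>

void onePoleLowpass(const float *input, float *output, vDSP_Length n,
                    double frequency, double sampleRate) {
    const double a1 = -std::exp(-2.0 * M_PI * (frequency / sampleRate));
    const double b0 = 1.0 + a1;
    const double coeffs[5] = { b0, 0.0, 0.0, a1, 0.0 };  // b0, b1, b2, a1, a2

    vDSP_biquad_Setup setup = vDSP_biquad_CreateSetup(coeffs, 1);
    float delay[4] = { 0.0f, 0.0f, 0.0f, 0.0f };          // 2*sections + 2
    vDSP_biquad(setup, delay, input, 1, output, 1, n);
    vDSP_biquad_DestroySetup(setup);
}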

Difference between these 2 functions?

I have 2 degree-to-radian functions pre-defined using #define:
Function 1:
#define degreesToRadians(degrees) (M_PI * degrees / 180.0)
Function 2:
#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)
Only the 2nd function returns the correct answer, while the first one produces a weird answer. What are the differences between them?
Neither of the two "functions" mentioned above is actually a function; they are macros, and the first macro is not safe. For example, expanding degreesToRadians(10 + 10) gives (M_PI * 10 + 10 / 180.0), which is interpreted as ((M_PI * 10) + (10 / 180.0)) and is clearly wrong, while expanding DEGREES_TO_RADIANS(10 + 10) gives ((10 + 10) / 180.0 * M_PI), which is correct.
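A quick demo of that expansion difference (my example values, not from the answer):
#include <cstdio>
#include <cmath>

#define degreesToRadians(degrees) (M_PI * degrees / 180.0)    // unsafe
#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)    // safe

int main() {
    printf("%f\n", degreesToRadians(10 + 10));   // 31.471..., wrong
    printf("%f\n", DEGREES_TO_RADIANS(10 + 10)); // 0.349..., i.e. 20° in radians
}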
The other difference is that M_PI * degrees might overflow the range of a double and thus give a wrong answer (but this requires a rather high value in degrees).
The calculations are pretty much identical, notwithstanding floating point limitations. However, you have angle surrounded with parentheses in the second macro, which is the right thing to do.
In the first macro, if you do:
x = degreesToRadians(a + 45);
then, remembering that macros are simple text substitutions, you'll end up with:
x = (M_PI * a + 45 / 180.0);
which will not end well, since it will be calculated as if you'd written:
x = (M_PI * a) + (45 / 180.0);
In other words, you simply multiply the angle by PI and add a constant 0.25.
If instead you change the first one to be:
#define degreesToRadians(degrees) (M_PI * (degrees) / 180.0)
then it should begin to behave a little better.
The other difference has to do with either large or small values of the angle. A divide-then-multiply on a small angle (and I mean really small like 10^-308, approaching the IEEE 754 limits) may result in a zero result while a multiply-then-divide on a large angle (like 10^308) may give you overflow.
My advice would be to ensure you use "normal" angles (or normalise them before conversion). Provided you do that, the different edge conditions of each method shouldn't matter.
And, in all honesty, you probably shouldn't even be using macros for this. With insanely optimising compilers and enumerations, macros should pretty much be relegated to conditional compilation nowadays. I'd simply rewrite it as a function along the lines of:
double degreesToRadians(double d) {
    return M_PI * d / 180.0;
}
Or, you could even adjust the code so as to not worry about small or large angles (if you're paranoid):
double degreesToRadians(double d) {
    if ((d > -1) && (d < 1))
        return (M_PI * d) / 180.0;
    return (d / 180.0) * M_PI;
}

using a loop to change color of pixels according to calculations

I am just starting to learn Jython, and just have a question which I cannot seem to get right.
From my text, I am to create a picture that is 640 x 480 pixels, and then, using a loop, pixel by pixel set the color to a calculation for r, g, b which we have already been given.
I can create a picture, I can set variables, however I cannot seem to go any further in creating a loop to set each pixel colour.
I know it's only simple, but just wondering if anyone can help me out here.
xrange() produces the integers in a range lazily, without building a whole list, and for will loop once per element of an iterable.
for row in xrange(480):
    for col in xrange(640):
        ...
This may help you to iterate through the pixels.
picture = makeEmptyPicture(400, 200)
pixels = getPixels(picture)
# make an empty picture and get the pixels

for px in getPixels(picture):
    x = getX(px)
    y = getY(px)
    r = (sin(x * radian * id[1]) * cos(y * radian * id[4]) + 1) * ord(StringID[0]) * 2.5
    g = (sin(x * radian * id[2]) * cos(y * radian * id[5]) + 1) * ord(StringID[0]) * 2.5
    b = (sin(x * radian * id[3]) * cos(y * radian * id[6]) + 1) * ord(StringID[0]) * 2.5
    newColor = makeColor(255 - r, 255 - g, 255 - b)
    setColor(px, newColor)

show(picture)
repaint(picture)

Fast formula for a "high contrast" curve

My inner loop contains a calculation that profiling shows to be problematic.
The idea is to take a greyscale pixel x (0 <= x <= 1), and "increase its contrast". My requirements are fairly loose, just the following:
for x < .5, 0 <= f(x) < x
for x > .5, x < f(x) <= 1
f(0) = 0
f(x) = 1 - f(1 - x), i.e. it should be "symmetric"
Preferably, the function should be smooth.
So the graph must look something like this:
[figure: an S-shaped curve through (0, 0), (0.5, 0.5) and (1, 1)]
I have two implementations (their results differ but both are conformant):
float cosContrastize(float x) {
    return .5 - cos(x * pi) / 2;
}
float mulContrastize(float i) {
    if (i < .5) return i * i * 2;
    i = 1 - i;
    return 1 - i * i * 2;
}
So I request either a microoptimization for one of these implementations, or an original, faster formula of your own.
Maybe one of you can even twiddle the bits ;)
Consider the following sigmoid-shaped functions (properly translated to the desired range):
error function
normal CDF
tanh
logit
I generated the above figure using MATLAB. If interested here's the code:
x = -3:.01:3;
plot( x, 2*(x>=0)-1, ...
x, erf(x), ...
x, tanh(x), ...
x, 2*normcdf(x)-1, ...
x, 2*(1 ./ (1 + exp(-x)))-1, ...
x, 2*((x-min(x))./range(x))-1 )
legend({'hard' 'erf' 'tanh' 'normcdf' 'logit' 'linear'})
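For instance, a tanh curve translated so that f(0) = 0, f(0.5) = 0.5 and f(1) = 1 could look like this (my sketch, not from the answer; k > 0 controls the steepness):
#include <cmath>

float tanhContrastize(float x, float k = 4.0f) {
    // Symmetric by construction: tanhContrastize(1 - x, k) == 1 - tanhContrastize(x, k).
    return 0.5f + 0.5f * std::tanh(k * (x - 0.5f)) / std::tanh(0.5f * k);
}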
Trivially you could simply threshold, but I imagine this is too dumb:
return i < 0.5 ? 0.0 : 1.0;
Since you mention 'increasing contrast' I assume the input values are luminance values. If so, and they are discrete (perhaps it's an 8-bit value), you could use a lookup table to do this quite quickly.
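A sketch of that idea (mine; it assumes 8-bit luminance input, which the answer only hypothesizes, and reuses the mulContrastize curve from the question to fill the table):
unsigned char lut[256];

void buildContrastLut() {
    for (int i = 0; i < 256; ++i) {
        float x = i / 255.0f;
        float y = (x < 0.5f) ? 2.0f * x * x
                             : 1.0f - 2.0f * (1.0f - x) * (1.0f - x);
        lut[i] = (unsigned char)(y * 255.0f + 0.5f);
    }
}
// Per pixel, applying the curve is then just: out = lut[in];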
Your 'mulContrastize' looks reasonably quick. One optimization would be to use integer math. Let's say, again, your input values could actually be passed as an 8-bit unsigned value in [0..255]. (Again, possibly a fine assumption?) You could do something roughly like...
int mulContrastize(int i) {
    if (i < 128) return (i * i) >> 7;  // the shift is really: * 2 / 256
    i = 255 - i;
    return 255 - ((i * i) >> 7);
}
A piecewise interpolation can be fast and flexible. It requires only a few decisions followed by a multiplication and addition, and can approximate any curve. It also avoids the coarseness that can be introduced by lookup tables (or the additional cost of two lookups followed by an interpolation to smooth it out), though the LUT might work perfectly fine for your case.
With just a few segments, you can get a pretty good match. Here there will be coarseness in the color gradients, which will be much harder to detect than coarseness in the absolute colors.
As Eamon Nerbonne points out in the comments, segmentation can be optimized by "choos[ing] your segmentation points based on something like the second derivative to maximize detail", that is, where the slope is changing the most. Clearly, in my posted example, having three segments in the middle of the five segment case doesn't add much more detail.
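An illustrative sketch of the piecewise-linear idea (the breakpoints here are made-up example values, not from the answer):
float piecewiseContrastize(float x) {
    // Symmetric breakpoints; replace with points sampled from the real curve.
    static const float xs[5] = { 0.0f, 0.25f, 0.5f, 0.75f, 1.0f };
    static const float ys[5] = { 0.0f, 0.10f, 0.5f, 0.90f, 1.0f };
    for (int i = 1; i < 5; ++i) {
        if (x <= xs[i]) {
            float t = (x - xs[i - 1]) / (xs[i] - xs[i - 1]);
            return ys[i - 1] + t * (ys[i] - ys[i - 1]);
        }
    }
    return 1.0f;
}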