How to zoom the Mandelbrot set - Objective-C

I have successfully implemented the Mandelbrot set as described in the Wikipedia article, but I do not know how to zoom into a specific section. This is the code I am using:
+ (void)createSetWithWidth:(int)width Height:(int)height Thing:(void (^)(int, int, int, int))thing
{
    for (int i = 0; i < height; ++i)
        for (int j = 0; j < width; ++j)
        {
            double x0 = ((4.0f * (i - (height / 2))) / (height)) - 0.0f;
            double y0 = ((4.0f * (j - (width / 2))) / (width)) + 0.0f;
            double x = 0.0f;
            double y = 0.0f;
            int iteration = 0;
            int max_iteration = 15;
            while ((((x * x) + (y * y)) <= 4.0f) && (iteration < max_iteration))
            {
                double xtemp = ((x * x) - (y * y)) + x0;
                y = ((2.0f * x) * y) + y0;
                x = xtemp;
                iteration += 1;
            }
            thing(j, i, iteration, max_iteration);
        }
}
It was my understanding that x0 should be in the range -2.5 to 1 and y0 should be in the range -1 to 1, and that reducing those ranges would zoom, but that didn't really work at all. How can I zoom?

Suppose the center is (cx, cy) and the side lengths of the window you want to display are (lx, ly). You can then use the following scaling formula (note the cast to double: i, j, width and height are ints and would otherwise divide to 0):
x0 = cx + ((double)i/height - 0.5)*lx;
y0 = cy + ((double)j/width - 0.5)*ly;
What it does is first scale the pixel down to the unit interval (0 <= i/height < 1), then shift the center (-0.5 <= i/height - 0.5 < 0.5), then scale up to your desired dimension (-0.5*lx <= (i/height - 0.5)*lx < 0.5*lx). Finally, it shifts the result to the center you specified. Shrinking lx and ly zooms in around (cx, cy).
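For concreteness, here is a small standalone sketch (my own; cx, cy, lx and ly are the hypothetical center and window size) that drops the formula into the structure of the loop from the question. Shrinking lx and ly zooms in around (cx, cy):
#include <stdio.h>

/* Render the set centred on (cx, cy), showing a window of size lx by ly.
   Shrinking lx and ly zooms in around that point. */
void renderMandelbrot(int width, int height, double cx, double cy, double lx, double ly)
{
    int max_iteration = 1000;   /* 15 is far too few to see any detail */
    for (int i = 0; i < height; ++i) {
        for (int j = 0; j < width; ++j) {
            double x0 = cx + ((double)i / height - 0.5) * lx;
            double y0 = cy + ((double)j / width  - 0.5) * ly;
            double x = 0.0, y = 0.0;
            int iteration = 0;
            while (x * x + y * y <= 4.0 && iteration < max_iteration) {
                double xtemp = x * x - y * y + x0;
                y = 2.0 * x * y + y0;
                x = xtemp;
                iteration += 1;
            }
            /* stand-in for the thing(j, i, iteration, max_iteration) callback */
            putchar(iteration == max_iteration ? '#' : '.');
        }
        putchar('\n');
    }
}

int main(void)
{
    renderMandelbrot(100, 40, -0.75, 0.0, 3.0, 3.0);       /* the whole set   */
    renderMandelbrot(100, 40, -0.745, 0.11, 0.02, 0.02);   /* zoomed-in patch */
    return 0;
}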

First off, with a max_iteration of 15 you're not going to see much detail. Mine uses 1000 iterations per point as a baseline, and can go to about 8000 iterations before it really gets too slow to wait for.
This might help: http://jc.unternet.net/src/java/com/jcomeau/Mandelbrot.java
This too: http://www.wikihow.com/Plot-the-Mandelbrot-Set-By-Hand

Related

How do I get the complexity of the bilinear/nearest-neighbour interpolation algorithms? (calculating the big O)

I want to calculate the big O of the following algorithms for resizing binary images:
Bilinear interpolation:
double scale_x = (double)new_height/(height-1);
double scale_y = (double)new_width/(width-1);
for (int i = 0; i < new_height; i++)
{
    int ii = i / scale_x;
    for (int j = 0; j < new_width; j++)
    {
        int jj = j / scale_y;
        double v00 = matrix[ii][jj], v01 = matrix[ii][jj + 1],
               v10 = matrix[ii + 1][jj], v11 = matrix[ii + 1][jj + 1];
        double fi = i / scale_x - ii, fj = j / scale_y - jj;
        double temp = (1 - fi) * ((1 - fj) * v00 + fj * v01) +
                      fi * ((1 - fj) * v10 + fj * v11);
        if (temp >= 0.5)
            result[i][j] = 1;
        else
            result[i][j] = 0;
    }
}
Nearest neighbour interpolation:
double scale_x = (double)height/new_height;
double scale_y = (double)width/new_width;
for (int i = 0; i < new_height; i++)
{
    int srcx = floor(i * scale_x);
    for (int j = 0; j < new_width; j++)
    {
        int srcy = floor(j * scale_y);
        result[i][j] = matrix[srcx][srcy];
    }
}
I assumed that the complexity of both is given by the loop dimensions, i.e. O(new_height*new_width). However, the bilinear interpolation clearly runs much slower than the nearest neighbour. Could you please explain how to compute the complexity correctly?
Both run in Theta(new_height*new_width) time, because apart from the loop iterations themselves all operations are constant time.
This doesn't in any way imply that the two programs execute equally fast. It merely means that if you increase new_height and/or new_width towards infinity, the ratio of execution times between the two programs will neither go to infinity nor to zero.
(This is making the assumption that the integer types are unbounded and that all arithmetic operations are constant time operations independent of the length of the operands. Otherwise there will be another relevant factor accounting for the cost of the arithmetic.)
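As a sanity check, here is a rough timing sketch (my own, not part of the answer) that runs both resizers from the question on a fixed 256x256 source at growing output sizes. Both times grow linearly with new_height*new_width, and their ratio stays roughly constant, which is what the Theta bound predicts:
#include <chrono>
#include <cstdio>
#include <vector>

using Grid = std::vector<std::vector<double>>;

static long long run_bilinear(const Grid& m, Grid& out, int height, int width) {
    int new_height = (int)out.size(), new_width = (int)out[0].size();
    double scale_x = (double)new_height / (height - 1);
    double scale_y = (double)new_width / (width - 1);
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < new_height; i++) {
        int ii = (int)(i / scale_x);
        for (int j = 0; j < new_width; j++) {
            int jj = (int)(j / scale_y);
            double v00 = m[ii][jj], v01 = m[ii][jj + 1],
                   v10 = m[ii + 1][jj], v11 = m[ii + 1][jj + 1];
            double fi = i / scale_x - ii, fj = j / scale_y - jj;
            double temp = (1 - fi) * ((1 - fj) * v00 + fj * v01) +
                          fi * ((1 - fj) * v10 + fj * v11);
            out[i][j] = temp >= 0.5 ? 1 : 0;
        }
    }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}

static long long run_nearest(const Grid& m, Grid& out, int height, int width) {
    int new_height = (int)out.size(), new_width = (int)out[0].size();
    double scale_x = (double)height / new_height;
    double scale_y = (double)width / new_width;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < new_height; i++) {
        int srcx = (int)(i * scale_x);
        for (int j = 0; j < new_width; j++) {
            int srcy = (int)(j * scale_y);
            out[i][j] = m[srcx][srcy];   // one read per output pixel
        }
    }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}

int main() {
    const int height = 256, width = 256;
    Grid matrix(height, Grid::value_type(width, 1.0));
    for (int n = 512; n <= 4096; n *= 2) {
        Grid out(n, Grid::value_type(n, 0.0));
        long long tb = run_bilinear(matrix, out, height, width);
        long long tn = run_nearest(matrix, out, height, width);
        std::printf("%4d x %4d: bilinear %lld us, nearest %lld us, ratio %.1f\n",
                    n, n, tb, tn, tb / (double)tn);
    }
    return 0;
}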

How to do a 3D sum using OpenMP

I am new to OpenMP. I am having trouble with a 3D sum and I don't know how to improve my code. Here is the code I want to improve with OpenMP; my aim is to speed up the calculation of this 3D sum. What should I add to my code, according to the rules of OpenMP?
I added #pragma omp parallel for reduction(+:integral) to my code, but an error occurs saying that the initialization of the 'for' loop is not correct. (The original post included a screenshot of the error message; my IDE's language is Chinese.) I use Visual Studio 2019.
#include <omp.h>
#include <stdio.h>
#include <math.h>

int main()
{
    double a = 0.3291;
    double d_title = 2.414;
    double b = 3.8037;
    double c = 4086;
    double nu_start = 0;
    double mu_start = 0;
    double z_start = 0;
    double step_nu = 2 * 3.1415926 / 100;
    double step_mu = 3.1415926 / 100;
    double step_z = 0;
    double nu = 0;
    double mu = 0;
    double z = 0;
    double integral = 0;
    double d_uv = 0;
    int i = 0;
    int j = 0;
    int k = 0;
#pragma omp parallel for default(none) shared(a, d_title, b, c, nu_start, mu_start, z_start, step_nu, step_mu) private(j, k, mu, nu, step_z, z, d_uv) reduction(+:integral)
    for (i = 0; i < 100; i++)
    {
        mu = mu_start + (i + 1) * step_mu;
        for (j = 0; j < 100; j++)
        {
            nu = nu_start + (j + 1) * step_nu;
            for (k = 0; k < 500; k++)
            {
                d_uv = (sin(mu) * sin(mu) * cos(nu) * cos(nu) + sin(mu) * sin(mu) * (a * sin(nu) - d_title * cos(nu)) * (a * sin(nu) - d_title * cos(nu)) + b * b * cos(mu) * cos(mu)) / (c * c);
                step_z = 20 / (d_uv * 500);
                z = z_start + (k + 1) * step_z;
                integral = integral + sin(mu) * (1 - 3 * sin(mu) * sin(mu) * cos(nu) * cos(nu)) * exp(-d_uv * z) * log(1 + z * z) * step_z * step_mu * step_nu;
            }
        }
    }
    double out = 0;
    out = integral / (c * c);
    return 0;
}
Solution (UPDATE: this is an answer to the original question):
To do the least typing, you just have to add the following line before the for (int i = ...) loop:
#pragma omp parallel for private( mu, nu, step_z, z, d_uv) reduction(+:integral)
Here you define which variables have to be private to avoid a data race. Note that variables are shared by default, so the variable integral is also shared, but all threads update its value, which is a data race. To avoid it you have two possibilities: use an atomic operation, or (a much better option) use a reduction by adding the reduction(+:integral) clause.
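To make the difference concrete, here is a small standalone sketch (a toy sum of my own, not the question's integral) showing both forms; the atomic version is correct but serializes every update of the shared variable, while the reduction gives each thread a private partial sum that is combined once at the end:
#include <omp.h>
#include <cmath>
#include <cstdio>

int main() {
    double sum_atomic = 0.0, sum_reduction = 0.0;

    #pragma omp parallel for
    for (int i = 0; i < 1000000; i++) {
        double v = std::sin(i * 1e-6);
        #pragma omp atomic
        sum_atomic += v;               // every thread contends on this one location
    }

    #pragma omp parallel for reduction(+:sum_reduction)
    for (int i = 0; i < 1000000; i++) {
        double v = std::sin(i * 1e-6);
        sum_reduction += v;            // private per-thread copy, combined at the end
    }

    std::printf("%f %f\n", sum_atomic, sum_reduction);
    return 0;
}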
Since you mentioned that you are a beginner in OpenMP, it is recommended to use the default(none) clause in the #pragma omp parallel for directive, so you have to define the sharing attributes explicitly. If you forget a variable you will get an error, which forces you to consider every variable involved in your parallel region and to think about possible data races:
#pragma omp parallel for default(none) shared(a, d_title, b, c, nu_start, mu_start, z_start, step_nu, step_mu) private( mu, nu, step_z, z, d_uv) reduction(+:integral)
Generally, it is recommended to define your variables in their minimum required scope, so variables defined inside the loop to be parallelized are automatically private. In that case you just have to add #pragma omp parallel for reduction(+:integral) before your outermost for loop, and your code becomes:
#pragma omp parallel for reduction(+:integral)
for (int i = 0; i < 100; i++)
{
    double mu = mu_start + (i + 1) * step_mu;
    for (int j = 0; j < 100; j++)
    {
        //int id = omp_get_thread_num();
        double nu = nu_start + (j + 1) * step_nu;
        for (int k = 0; k < 500; k++)
        {
            double d_uv = (sin(mu) * sin(mu) * cos(nu) * cos(nu) + sin(mu) * sin(mu) * (a * sin(nu) - d_title * cos(nu)) * (a * sin(nu) - d_title * cos(nu)) + b * b * cos(mu) * cos(mu)) / (c * c);
            double step_z = 20 / (d_uv * 500);
            double z = z_start + (k + 1) * step_z;
            //int id = omp_get_thread_num();
            integral = integral + sin(mu) * (1 - 3 * sin(mu) * sin(mu) * cos(nu) * cos(nu)) * exp(-d_uv * z) * log(1 + z * z) * step_z * step_mu * step_nu;
        }
    }
}
Runtimes: 44 ms (1 thread) and 11 ms (4 threads) on my computer (g++ -O3 -mavx2 -fopenmp).

Look-at quaternion using up vector

I have a camera (in a custom 3D engine) that accepts a quaternion for the rotation transform. I have two 3D points representing a camera and an object to look at. I want to calculate the quaternion that looks from the camera to the object, while respecting the world up axis.
This question asks for the same thing without the "up" vector. All three answers result in the camera pointing in the correct direction, but rolling (as in yaw/pitch/roll; imagine leaning your head onto your ear while looking at something).
I can calculate an orthonormal basis of vectors that match the desired coordinate system by:
lookAt = normalize(target - camera)
sideaxis = cross(lookAt, worldUp)
rotatedup = cross(sideaxis, lookAt)
How can I create a quaternion from those three vectors? This question asks for the same thing... but unfortunately the only (and accepted) answer says, roughly, "let's assume you don't care about roll", and then goes on to ignore the up axis. I do care about roll. I don't want to ignore the up axis.
A previous answer has given a valid solution using angles. This answer will present an alternative method.
The orthonormal basis vectors, renaming them F = lookAt, R = sideaxis, U = rotatedup, directly form the columns of the 3x3 rotation matrix which is equivalent to your desired quaternion:
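Written out explicitly (my notation; the column layout is the one implied by the adapted snippet further down, whose trace is R.x + U.y + F.z):
        | R.x  U.x  F.x |
    M = | R.y  U.y  F.y |
        | R.z  U.z  F.z |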
Multiplication with a vector is equivalent to using said vector's components as the coordinates in the camera's basis.
A 3x3 rotation matrix can be converted into a quaternion without conversion to angles / use of costly trigonometric functions. Below is a numerically stable C++ snippet which does this, returning a normalized quaternion:
inline void CalculateRotation( Quaternion& q ) const {
    float trace = a[0][0] + a[1][1] + a[2][2];
    if( trace > 0 ) {
        float s = 0.5f / sqrtf(trace + 1.0f);
        q.w = 0.25f / s;
        q.x = ( a[2][1] - a[1][2] ) * s;
        q.y = ( a[0][2] - a[2][0] ) * s;
        q.z = ( a[1][0] - a[0][1] ) * s;
    } else {
        if ( a[0][0] > a[1][1] && a[0][0] > a[2][2] ) {
            float s = 2.0f * sqrtf( 1.0f + a[0][0] - a[1][1] - a[2][2]);
            q.w = (a[2][1] - a[1][2] ) / s;
            q.x = 0.25f * s;
            q.y = (a[0][1] + a[1][0] ) / s;
            q.z = (a[0][2] + a[2][0] ) / s;
        } else if (a[1][1] > a[2][2]) {
            float s = 2.0f * sqrtf( 1.0f + a[1][1] - a[0][0] - a[2][2]);
            q.w = (a[0][2] - a[2][0] ) / s;
            q.x = (a[0][1] + a[1][0] ) / s;
            q.y = 0.25f * s;
            q.z = (a[1][2] + a[2][1] ) / s;
        } else {
            float s = 2.0f * sqrtf( 1.0f + a[2][2] - a[0][0] - a[1][1] );
            q.w = (a[1][0] - a[0][1] ) / s;
            q.x = (a[0][2] + a[2][0] ) / s;
            q.y = (a[1][2] + a[2][1] ) / s;
            q.z = 0.25f * s;
        }
    }
}
Source: http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion
Converting this to suit your situation is of course just a matter of swapping the matrix elements for the corresponding vector components:
// your code from before
F = normalize(target - camera);   // lookAt
R = normalize(cross(F, worldUp)); // sideaxis
U = cross(R, F);                  // rotatedup
// note that R needed to be re-normalized,
// since F and worldUp are not necessarily perpendicular,
// so we must remove the sin(angle) factor of the cross-product;
// the same is not needed for U because dot(R, F) = 0
// adapted source
Quaternion q;
double trace = R.x + U.y + F.z;
if (trace > 0.0) {
    double s = 0.5 / sqrt(trace + 1.0);
    q.w = 0.25 / s;
    q.x = (U.z - F.y) * s;
    q.y = (F.x - R.z) * s;
    q.z = (R.y - U.x) * s;
} else {
    if (R.x > U.y && R.x > F.z) {
        double s = 2.0 * sqrt(1.0 + R.x - U.y - F.z);
        q.w = (U.z - F.y) / s;
        q.x = 0.25 * s;
        q.y = (U.x + R.y) / s;
        q.z = (F.x + R.z) / s;
    } else if (U.y > F.z) {
        double s = 2.0 * sqrt(1.0 + U.y - R.x - F.z);
        q.w = (F.x - R.z) / s;
        q.x = (U.x + R.y) / s;
        q.y = 0.25 * s;
        q.z = (F.y + U.z) / s;
    } else {
        double s = 2.0 * sqrt(1.0 + F.z - R.x - U.y);
        q.w = (R.y - U.x) / s;
        q.x = (F.x + R.z) / s;
        q.y = (F.y + U.z) / s;
        q.z = 0.25 * s;
    }
}
(And needless to say swap y and z if you're using OpenGL.)
Assume you initially have three orthonormal vectors: worldUp, worldFront and worldSide, and let's use your equations for lookAt, sideAxis and rotatedUp. The worldSide vector will not be necessary to achieve the result.
Break the operation in two. First, rotate around worldUp. Then rotate around sideAxis, which will now actually be parallel to the rotated worldSide.
Axis1 = worldUp
Angle1 = (see below)
Axis2 = cross(lookAt, worldUp) = sideAxis
Angle2 = (see below)
Each of these rotations corresponds to a quaternion via:
Q = cos(Angle/2) + i * Axis_x * sin(Angle/2) + j * Axis_y * sin(Angle/2) + k * Axis_z * sin(Angle/2)
Multiply Q1 and Q2 together and you get the desired quaternion.
Details for the angles:
Let P(worldUp) be the projection matrix onto the worldUp direction, i.e. P(worldUp)·v = dot(worldUp, v)·worldUp (which equals cos(worldUp, v)·worldUp for unit vectors), or, using kets and bras, P(worldUp) = |worldUp><worldUp|. Let I be the identity matrix.
Project lookAt in the plane perpendicular to worldUp and normalize it.
tmp1 = (I - P(worldUp)).lookAt
n1 = normalize(tmp1)
Angle1 = arccos(dot(worldFront,n1))
Angle2 = arccos(dot(lookAt,n1))
EDIT1:
Notice that there is no need to compute transcendental functions. Since the dot product of a pair of normalized vectors is the cosine of the angle between them, setting cos(t) = x we have the half-angle identities:
cos(t/2) = sqrt((1 + x)/2)
sin(t/2) = sqrt((1 - x)/2)
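As a rough C++ sketch of this two-rotation construction (Vec3, Quat and the function names are my own placeholders, and note that the half-angle square roots lose the sign of the angle, so each axis must be oriented so that a positive rotation is the intended one, e.g. by checking a cross product):
#include <cmath>

struct Vec3 { double x, y, z; };
struct Quat { double w, x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 scale(Vec3 v, double s) { return { v.x * s, v.y * s, v.z * s }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v) { return scale(v, 1.0 / std::sqrt(dot(v, v))); }

// Quaternion for a rotation about a unit axis, given cos(angle) only:
// cos(t/2) = sqrt((1 + x)/2), sin(t/2) = sqrt((1 - x)/2).
static Quat fromAxisCos(Vec3 axis, double cosAngle) {
    double c = std::sqrt((1.0 + cosAngle) / 2.0);
    double s = std::sqrt((1.0 - cosAngle) / 2.0);
    return { c, axis.x * s, axis.y * s, axis.z * s };
}

// Hamilton product a*b (applies b first, then a, under v' = q v q^-1).
static Quat mul(Quat a, Quat b) {
    return { a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
             a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
             a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
             a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w };
}

Quat lookAtTwoStep(Vec3 camera, Vec3 target, Vec3 worldUp, Vec3 worldFront) {
    Vec3 lookAt = normalize(sub(target, camera));
    // n1 = normalize((I - P(worldUp)) * lookAt): lookAt projected into the
    // plane perpendicular to worldUp.
    Vec3 n1 = normalize(sub(lookAt, scale(worldUp, dot(lookAt, worldUp))));
    Vec3 sideAxis = normalize(cross(lookAt, worldUp));
    Quat q1 = fromAxisCos(worldUp, dot(worldFront, n1));  // Angle1 about worldUp
    Quat q2 = fromAxisCos(sideAxis, dot(lookAt, n1));     // Angle2 about sideAxis
    return mul(q2, q1);                                   // Q1 applied first
}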
If somebody is looking for a C# version that handles every matrix edge case (not input edge cases!), here it is:
public static SoftQuaternion LookRotation(SoftVector3 forward, SoftVector3 up)
{
    forward = SoftVector3.Normalize(forward);

    // First matrix column
    SoftVector3 sideAxis = SoftVector3.Normalize(SoftVector3.Cross(up, forward));
    // Second matrix column
    SoftVector3 rotatedUp = SoftVector3.Cross(forward, sideAxis);
    // Third matrix column
    SoftVector3 lookAt = forward;

    // Sums of matrix main diagonal elements
    SoftFloat trace1 = SoftFloat.One + sideAxis.X - rotatedUp.Y - lookAt.Z;
    SoftFloat trace2 = SoftFloat.One - sideAxis.X + rotatedUp.Y - lookAt.Z;
    SoftFloat trace3 = SoftFloat.One - sideAxis.X - rotatedUp.Y + lookAt.Z;

    // If orthonormal vectors forms identity matrix, then return identity rotation
    if (trace1 + trace2 + trace3 < SoftMath.CalculationsEpsilon)
    {
        return Identity;
    }

    // Choose largest diagonal
    if (trace1 + SoftMath.CalculationsEpsilon > trace2 && trace1 + SoftMath.CalculationsEpsilon > trace3)
    {
        SoftFloat s = SoftMath.Sqrt(trace1) * (SoftFloat)2.0f;
        return new SoftQuaternion(
            (SoftFloat)0.25f * s,
            (rotatedUp.X + sideAxis.Y) / s,
            (lookAt.X + sideAxis.Z) / s,
            (rotatedUp.Z - lookAt.Y) / s);
    }
    else if (trace2 + SoftMath.CalculationsEpsilon > trace1 && trace2 + SoftMath.CalculationsEpsilon > trace3)
    {
        SoftFloat s = SoftMath.Sqrt(trace2) * (SoftFloat)2.0f;
        return new SoftQuaternion(
            (rotatedUp.X + sideAxis.Y) / s,
            (SoftFloat)0.25f * s,
            (lookAt.Y + rotatedUp.Z) / s,
            (lookAt.X - sideAxis.Z) / s);
    }
    else
    {
        SoftFloat s = SoftMath.Sqrt(trace3) * (SoftFloat)2.0f;
        return new SoftQuaternion(
            (lookAt.X + sideAxis.Z) / s,
            (lookAt.Y + rotatedUp.Z) / s,
            (SoftFloat)0.25f * s,
            (sideAxis.Y - rotatedUp.X) / s);
    }
}
This implementation is based on a deeper understanding of this conversation, and was tested against many edge-case scenarios.
P.S.
The quaternion's constructor order is (x, y, z, w).
SoftFloat is a software float type, so you can easily change it to the built-in float if needed.
For a fully edge-case-safe implementation (including input) check this repo.
If you normalize the three vectors lookAt, sideaxis and rotatedup, they are the components of a 3x3 rotation matrix. So just convert that rotation matrix to a quaternion.

What is the most optimized way of creating a ray tracer?

Currently, I am working with a ray tracer that takes an iterative approach towards developing the scenes. My goal is to turn it into a recursive ray tracer.
At the moment, the ray tracer performs the following operation to create the bitmap the image is stored in:
int WIDTH = 640;
int HEIGHT = 640;
BMP Image(WIDTH, HEIGHT); // create new bitmap

// Slightly shoot rays left or right of the camera direction
double xAMT, yAMT;

Color blue(0.1, 0.61, 0.76, 0);
for (int x = 0; x < WIDTH; x++) {
    for (int y = 0; y < HEIGHT; y++) {
        if (WIDTH > HEIGHT) {
            xAMT = ((x + 0.5) / WIDTH) * aspectRatio - (((WIDTH - HEIGHT) / (double)HEIGHT) / 2);
            yAMT = ((HEIGHT - y) + 0.5) / HEIGHT;
        }
        else if (HEIGHT > WIDTH) {
            xAMT = (x + 0.5) / WIDTH;
            yAMT = (((HEIGHT - y) + 0.5) / HEIGHT) / aspectRatio - (((HEIGHT - WIDTH) / (double)WIDTH) / 2);
        }
        else {
            xAMT = (x + 0.5) / WIDTH;
            yAMT = ((HEIGHT - y) + 0.5) / HEIGHT;
        }
        // ... calculate intersections, shading, reflectiveness, etc.
        Image.setPixel(x, y, blue); // this is here just as an example
    }
}
Is there another approach to calculating the reflective and refractive child rays outside the double for-loop?
Are the for-loops necessary? (I assume yes, because of the bitmap?)
What approaches can be taken to minimize/optimize an iterative ray tracer?
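For illustration, here is a minimal self-contained sketch (a single hard-coded sphere, console output, all names my own and not taken from the code above) of what the recursive structure usually looks like: the per-pixel double for-loop stays, but reflection (refraction works the same way) is handled by trace() calling itself with a shrinking depth instead of being unrolled inside the loop body:
#include <cmath>
#include <cstdio>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) { return v * (1.0 / std::sqrt(dot(v, v))); }

struct Ray { Vec3 origin, dir; };

// Single hard-coded reflective unit sphere at the origin.
static bool hitSphere(const Ray& r, double& t, Vec3& n) {
    double b = 2.0 * dot(r.dir, r.origin);
    double c = dot(r.origin, r.origin) - 1.0;
    double disc = b * b - 4.0 * c;
    if (disc < 0) return false;
    t = (-b - std::sqrt(disc)) / 2.0;
    if (t < 1e-4) return false;
    n = normalize(r.origin + r.dir * t);
    return true;
}

// Recursive trace: local shading plus one reflected child ray per bounce.
static double trace(const Ray& ray, int depth) {
    double t; Vec3 n;
    if (depth <= 0 || !hitSphere(ray, t, n)) return 0.1;              // background
    double local = std::fmax(0.0, dot(n, normalize({1.0, 1.0, -1.0}))); // fake light
    Vec3 hit = ray.origin + ray.dir * t;
    Ray reflected{hit, normalize(ray.dir - n * (2.0 * dot(ray.dir, n)))};
    return 0.7 * local + 0.3 * trace(reflected, depth - 1);           // mix in child ray
}

int main() {
    const int WIDTH = 60, HEIGHT = 30, MAX_DEPTH = 3;
    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++) {
            double xAMT = (x + 0.5) / WIDTH * 2.0 - 1.0;   // same idea as the
            double yAMT = 1.0 - (y + 0.5) / HEIGHT * 2.0;  // question's mapping
            Ray primary{{0.0, 0.0, -3.0}, normalize({xAMT, yAMT, 1.0})};
            std::putchar(" .:-=+*#"[(int)(trace(primary, MAX_DEPTH) * 7.99)]);
        }
        std::putchar('\n');
    }
    return 0;
}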

Angle between two lines is wrong

I want to get the angle between two lines, so I used this code:
int posX = (ScreenWidth) >> 1;
int posY = (ScreenHeight) >> 1;
double radians, degrees;
radians = atan2f( y - posY , x - posX);
degrees = -CC_RADIANS_TO_DEGREES(radians);
NSLog(@"%f %f", degrees, radians);
But it doesn't work.
The log output is: 146.309935 -2.553590
What is wrong? I can't figure out the reason. Please help me.
If you simply use
radians = atan2f( y - posY , x - posX);
you'll get the angle measured against the horizontal line y = posY.
You'll need to add M_PI_2 to your radians value to get the result you expect.
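For concreteness, a tiny standalone sketch of that correction (the coordinates are made-up example values, not from the original post):
#include <math.h>
#include <stdio.h>

int main(void) {
    int posX = 160, posY = 240;            /* screen centre                    */
    int x = 160, y = 0;                    /* a point straight "up" on screen  */
    double radians = atan2f(y - posY, x - posX) + M_PI_2;
    double degrees = radians * 180.0 / M_PI;
    printf("%f %f\n", degrees, radians);   /* ~0 0 (up to float rounding);
                                              without M_PI_2 it would be -90   */
    return 0;
}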
Here's a function I use. It works great for me...
float cartesianAngle(float x, float y) {
    float a = atanf(y / (x ? x : 0.0000001));
    if (x > 0 && y > 0) a += 0;
    else if (x < 0 && y > 0) a += M_PI;
    else if (x < 0 && y < 0) a += M_PI;
    else if (x > 0 && y < 0) a += M_PI * 2;
    return a;
}
EDIT: After some research I found out you can just use atan2(y,x). Most compiler libraries have this function. You can ignore my function above.
If you have 3 points and want to calculate the angle between them, here is a quick and correct way of calculating the right angle value:
double AngleBetweenThreePoints(CGPoint pointA, CGPoint pointB, CGPoint pointC)
{
    CGFloat a = pointB.x - pointA.x;
    CGFloat b = pointB.y - pointA.y;
    CGFloat c = pointB.x - pointC.x;
    CGFloat d = pointB.y - pointC.y;
    CGFloat atanA = atan2(a, b);
    CGFloat atanB = atan2(c, d);
    return atanB - atanA;
}
This will work for you if you specify a point on one of the lines, the intersection point, and a point on the other line.
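For reference, a small self-contained check of the same arithmetic ported to plain C++ (Point stands in for CGPoint; the input points are made-up). The returned value is a signed angle at pointB, so add 2*pi when it comes out negative if you want a value in [0, 2*pi):
#include <cmath>
#include <cstdio>

struct Point { double x, y; };

double angleBetweenThreePoints(Point a, Point b, Point c) {
    double atanA = std::atan2(b.x - a.x, b.y - a.y);
    double atanB = std::atan2(b.x - c.x, b.y - c.y);
    return atanB - atanA;
}

int main() {
    Point a{0, 1}, b{0, 0}, c{1, 0};   // two perpendicular rays meeting at b
    double angle = angleBetweenThreePoints(a, b, c);
    if (angle < 0) angle += 2.0 * M_PI;
    std::printf("%f degrees\n", angle * 180.0 / M_PI);   // prints 90.000000 degrees
    return 0;
}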