What is the most optimized way of creating a ray tracer?

Currently, I am working with a ray tracer that takes an iterative approach to rendering the scene. My goal is to turn it into a recursive ray tracer.
At the moment, the ray tracer performs the following operation to fill the bitmap the image is stored in:
int WIDTH = 640;
int HEIGHT = 640;
BMP Image(WIDTH, HEIGHT); // create new bitmap
// Shoot rays slightly left or right of the camera direction
double xAMT, yAMT;
Color blue(0.1, 0.61, 0.76, 0);
for (int x = 0; x < WIDTH; x++) {
    for (int y = 0; y < HEIGHT; y++) {
        if (WIDTH > HEIGHT) {
            xAMT = ((x + 0.5) / WIDTH) * aspectRatio - (((WIDTH - HEIGHT) / (double)HEIGHT) / 2);
            yAMT = ((HEIGHT - y) + 0.5) / HEIGHT;
        }
        else if (HEIGHT > WIDTH) {
            xAMT = (x + 0.5) / WIDTH;
            yAMT = (((HEIGHT - y) + 0.5) / HEIGHT) / aspectRatio - (((HEIGHT - WIDTH) / (double)WIDTH) / 2);
        }
        else {
            xAMT = (x + 0.5) / WIDTH;
            yAMT = ((HEIGHT - y) + 0.5) / HEIGHT;
        }
        // ... calculate intersections, shading, reflectiveness, etc.
        Image.setPixel(x, y, blue); // this is here just as an example
    }
}
Is there another approach to calculating the reflective and refractive child rays outside the double for-loop?
Are the for-loops necessary? // yes because of the bitmap?
What approaches can be taken to minimize/optimize an iterative ray tracer?
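One common way to structure the recursive version is to keep the double for-loop that produces one primary ray per pixel (the loops themselves stay, since every bitmap pixel needs a value) and move the reflection/refraction work into a trace() function that calls itself for the child rays. The sketch below only illustrates that shape; Vec3, Ray, Hit, Color, intersectScene, reflectedRay, and the depth cap are made-up placeholders, not the classes from the code above.
// Made-up minimal types; the existing Color/vector classes would be used instead.
struct Vec3 { double x, y, z; };
struct Ray  { Vec3 origin, dir; };

struct Color {
    double r, g, b;
    Color(double r_ = 0, double g_ = 0, double b_ = 0) : r(r_), g(g_), b(b_) {}
    Color operator*(double s) const { return Color(r * s, g * s, b * s); }
    Color operator+(const Color& o) const { return Color(r + o.r, g + o.g, b + o.b); }
};

struct Hit {
    bool   found;
    Vec3   point, normal;
    double reflectivity;   // 0 = matte, 1 = perfect mirror
    Color  surface;        // locally shaded color of the hit point
};

// Assumed helpers standing in for routines the iterative tracer already has in some form.
Hit intersectScene(const Ray& r);
Ray reflectedRay(const Ray& r, const Hit& h);

// Trace one ray; depth limits how many child bounces are followed.
Color trace(const Ray& ray, int depth) {
    const int MAX_DEPTH = 5;                    // arbitrary recursion cap
    Hit hit = intersectScene(ray);
    if (!hit.found)
        return Color(0.1, 0.61, 0.76);          // background color
    Color local = hit.surface;                  // intersections/shading as before
    if (depth < MAX_DEPTH && hit.reflectivity > 0.0) {
        Color bounced = trace(reflectedRay(ray, hit), depth + 1);   // reflective child ray
        local = local * (1.0 - hit.reflectivity) + bounced * hit.reflectivity;
    }
    return local;                               // a refractive child ray would follow the same pattern
}

// Inside the existing pixel loop, after xAMT and yAMT are computed:
//     Ray primary = ...;                       // build the camera ray from xAMT, yAMT
//     Image.setPixel(x, y, trace(primary, 0));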

Related

How to make a 2D shader work with a ParallaxBackground node in Godot?

In my game I want to make a scrolling background with moving stars. I am using a ParallaxBackground node with a ParallaxLayer as a child, and the latter has a TextureRect child that displays a 2D shader for the stars.
Nodes hierarchy:
ParallaxBackground -> StarsLayer -> Stars
Stars is the TextureRect and its rect_size equals the project window size.
Here is the 2d shader that it uses:
shader_type canvas_item;

uniform vec4 bg_color: hint_color;

float rand(vec2 st) {
    return fract(sin(dot(st.xy, vec2(12.9898, 78.233))) * 43758.5453123);
}

void fragment() {
    float size = 100.0;
    float prob = 0.9;
    vec2 pos = floor(1.0 / size * FRAGCOORD.xy);
    float color = 0.0;
    float starValue = rand(pos);
    if (starValue > prob)
    {
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float t = 0.9 + 0.2 * sin(TIME * 8.0 + (starValue - prob) / (1.0 - prob) * 45.0);
        color = 1.0 - distance(FRAGCOORD.xy, center) / (0.5 * size);
        color = color * t / (abs(FRAGCOORD.y - center.y)) * t / (abs(FRAGCOORD.x - center.x));
    }
    else if (rand(SCREEN_UV.xy / 20.0) > 0.996)
    {
        float r = rand(SCREEN_UV.xy);
        color = r * (0.85 * sin(TIME * (r * 5.0) + 720.0 * r) + 0.95);
    }
    COLOR = vec4(vec3(color), 1.0) + bg_color;
}
Here is ParallaxBackground script:
extends ParallaxBackground

onready var stars_layer = $StarsLayer
var bg_offset = 0.0

func _ready():
    stars_layer.motion_mirroring = Vector2(0, Helpers.WINDOW_SIZE.y)

func _process(delta):
    bg_offset += 30 * delta
    scroll_offset = Vector2(0, bg_offset)
The problem is that the stars are shown but are not moving at all.
Use motion_offset instead of scroll_offset
func _process(delta):
    motion_offset += 30 * delta

Clamp Camera to Map (Zoom issue)

I'm creating a game in LibGDX and am using Tiled as my map system.
I'm trying to contain an OrthographicCamera within the bounds of my TiledMap. I use MathUtils.clamp to achieve this. When the camera is at a normal zoom of 1.0f, it works perfectly. However, when the camera is zoomed in further, to, say, 0.75f, the camera is clamped to the wrong location because the clamp has no information about the zoom value.
position.x = MathUtils.clamp(position.x * (gameScreen.gameCamera.camera.zoom),
        gameScreen.gameCamera.camera.viewportWidth / 2,
        gameScreen.mapHandler.mapPixelWidth - (gameScreen.gameCamera.camera.viewportWidth / 2));
position.y = MathUtils.clamp(position.y * (gameScreen.gameCamera.camera.zoom),
        gameScreen.gameCamera.camera.viewportHeight / 2,
        gameScreen.mapHandler.mapPixelHeight - (gameScreen.gameCamera.camera.viewportHeight / 2));
My question: How do I include the zoom value in my clamp code so the camera is correctly clamped? Any ideas?
Thank you!
- Jake
You should apply the zoom to the viewport size (the visible area), not to the camera position:
float worldWidth = gameScreen.mapHandler.mapPixelWidth;
float worldHeight = gameScreen.mapHandler.mapPixelHeight;
float zoom = gameScreen.gameCamera.camera.zoom;
float zoomedHalfWorldWidth = zoom * gameScreen.gameCamera.camera.viewportWidth / 2;
float zoomedHalfWorldHeight = zoom * gameScreen.gameCamera.camera.viewportHeight / 2;
//min and max values for camera's x coordinate
float minX = zoomedHalfWorldWidth;
float maxX = worldWidth - zoomedHalfWorldWidth;
//min and max values for camera's y coordinate
float minY = zoomedHalfWorldHeight;
float maxY = worldHeight - zoomedHalfWorldHeight;
position.x = MathUtils.clamp(position.x, minX, maxX);
position.y = MathUtils.clamp(position.y, minY, maxY);
Note that if the visible area can be larger than the world size, you must handle such situations differently:
if (maxX <= minX) {
//visible area width is bigger than the worldWidth -> set the camera at the world centerX
position.x = worldWidth / 2;
} else {
position.x = MathUtils.clamp(position.x, minX, maxX);
}
if (maxY <= minY) {
//visible area height is bigger than the worldHeight -> set the camera at the world centerY
position.y = worldHeight / 2;
} else {
position.y = MathUtils.clamp(position.y, minY, maxY);
}

Collision between a circle and a rectangle

I have a problem with collision detection between a circle and a rectangle. I have tried to solve the problem with the Pythagorean theorem, but none of the checks works: the rectangle ends up colliding with the rectangular bounding box of the circle instead of the circle itself.
if (CGRectIntersectsRect(player.frame, visibleEnemy.frame)) {
    if (([visibleEnemy spriteTyp] == jumper || [visibleEnemy spriteTyp] == wobble)) {
        if ((visibleEnemy.center.x - player.frame.origin.x) * (visibleEnemy.center.x - player.frame.origin.x) +
            (visibleEnemy.center.y - player.frame.origin.y) * (visibleEnemy.center.y - player.frame.origin.y) <=
            (visibleEnemy.bounds.size.width/2 * visibleEnemy.bounds.size.width/2)) {
            NSLog(@"Check 1");
            normalAction = NO;
        }
        if ((visibleEnemy.center.x - (player.frame.origin.x + player.bounds.size.width)) *
            (visibleEnemy.center.x - (player.frame.origin.x + player.bounds.size.width)) +
            (visibleEnemy.center.y - player.frame.origin.y) * (visibleEnemy.center.y - player.frame.origin.y) <=
            (visibleEnemy.bounds.size.width/2 * visibleEnemy.bounds.size.width/2)) {
            NSLog(@"Check 2");
            normalAction = NO;
        }
        else {
            NSLog(@"Check 3");
            normalAction = NO;
        }
    }
}
Here is how I did it in one of my small game projects. It gave me the best results and it's simple. My code detects whether there is a collision between a circle and a line, so you can easily adapt it to circle-rectangle collision detection by checking all 4 edges of the rectangle.
Let's say a ball has a radius ballRadius and a location (xBall, yBall). The line is defined by two points, (xStart, yStart) and (xEnd, yEnd).
Implementation of a simple collision detection:
float ballRadius = ...;
float x1 = xStart - xBall;
float y1 = yStart - yBall;
float x2 = xEnd - xBall;
float y2 = yEnd - yBall;
float dx = x2 - x1;
float dy = y2 - y1;
float dr = sqrtf(powf(dx, 2) + powf(dy, 2));
float D = x1*y2 - x2*y1;
float delta = powf(ballRadius*0.9, 2)*powf(dr, 2) - powf(D, 2);

if (delta >= 0)
{
    // Collision detected
}
If delta is greater than zero there are two intersections between the ball (circle) and the line. If delta is equal to zero there is exactly one intersection – a perfect collision (the line is tangent to the circle).
I hope it will help you.
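A different test that is often used for circle-versus-rectangle collision (just a minimal sketch with made-up names, not the line-based method above) is to clamp the circle's center to the rectangle and compare the resulting distance with the radius:
#include <math.h>
#include <stdbool.h>

// Returns true if the circle (cx, cy, radius) overlaps the axis-aligned
// rectangle with origin (rx, ry) and size (rw, rh).
bool circleIntersectsRect(float cx, float cy, float radius,
                          float rx, float ry, float rw, float rh) {
    // Closest point of the rectangle to the circle's center.
    float nearestX = fmaxf(rx, fminf(cx, rx + rw));
    float nearestY = fmaxf(ry, fminf(cy, ry + rh));
    // Compare squared distance with squared radius to avoid a square root.
    float dx = cx - nearestX;
    float dy = cy - nearestY;
    return dx * dx + dy * dy <= radius * radius;
}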

Angle between two lines is wrong

I want to get the angle between two lines.
So I used this code.
int posX = (ScreenWidth) >> 1;
int posY = (ScreenHeight) >> 1;
double radians, degrees;
radians = atan2f( y - posY , x - posX);
degrees = -CC_RADIANS_TO_DEGREES(radians);
NSLog(@"%f %f", degrees, radians);
But it doesn't work.
The log shows: 146.309935 -2.553590
What's wrong? I can't figure out the reason.
Please help me.
If you simply use
radians = atan2f( y - posY , x - posX);
you'll get the angle measured from the horizontal line y = posY.
You'll need to add M_PI_2 to your radians value to get the correct result.
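Applied to the code in the question, that adjustment would look roughly like this (a sketch reusing the question's variables and the cocos2d CC_RADIANS_TO_DEGREES macro that is already used above):
radians = atan2f(y - posY, x - posX) + M_PI_2; // shift the reference axis as described above
degrees = -CC_RADIANS_TO_DEGREES(radians);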
Here's a function I use. It works great for me...
float cartesianAngle(float x, float y) {
    float a = atanf(y / (x ? x : 0.0000001));
    if (x > 0 && y > 0) a += 0;
    else if (x < 0 && y > 0) a += M_PI;
    else if (x < 0 && y < 0) a += M_PI;
    else if (x > 0 && y < 0) a += M_PI * 2;
    return a;
}
EDIT: After some research I found out you can just use atan2(y, x). Most standard math libraries have this function. You can ignore my function above.
If you have 3 points and want to calculate the angle between them, here is a quick and correct way of calculating the angle value:
double AngleBetweenThreePoints(CGPoint pointA, CGPoint pointB, CGPoint pointC)
{
    CGFloat a = pointB.x - pointA.x;
    CGFloat b = pointB.y - pointA.y;
    CGFloat c = pointB.x - pointC.x;
    CGFloat d = pointB.y - pointC.y;

    CGFloat atanA = atan2(a, b);
    CGFloat atanB = atan2(c, d);

    return atanB - atanA;
}
This will work for you if you specify a point on one of the lines, the intersection point, and a point on the other line.

how to zoom mandelbrot set

I have successfully implemented the Mandelbrot set as described in the Wikipedia article, but I do not know how to zoom into a specific section. This is the code I am using:
+ (void)createSetWithWidth:(int)width Height:(int)height Thing:(void (^)(int, int, int, int))thing
{
    for (int i = 0; i < height; ++i)
        for (int j = 0; j < width; ++j)
        {
            double x0 = ((4.0f * (i - (height / 2))) / (height)) - 0.0f;
            double y0 = ((4.0f * (j - (width / 2))) / (width)) + 0.0f;
            double x = 0.0f;
            double y = 0.0f;
            int iteration = 0;
            int max_iteration = 15;
            while ((((x * x) + (y * y)) <= 4.0f) && (iteration < max_iteration))
            {
                double xtemp = ((x * x) - (y * y)) + x0;
                y = ((2.0f * x) * y) + y0;
                x = xtemp;
                iteration += 1;
            }
            thing(j, i, iteration, max_iteration);
        }
}
It was my understanding that x0 should be in the range -2.5 to 1 and y0 should be in the range -1 to 1, and that reducing those ranges would zoom, but that didn't really work at all. How can I zoom?
Suppose the center is (cx, cy) and the length of the region you want to display is (lx, ly); you can use the following scaling formula (the (double) casts avoid integer division):
x0 = cx + ((double)i / height - 0.5) * lx;
y0 = cy + ((double)j / width - 0.5) * ly;
What it does is first scale the pixel index down to the unit interval (0 <= i/height < 1), then shift it so the center sits at zero (-0.5 <= i/height - 0.5 < 0.5), then scale it up to your desired dimension (-0.5*lx <= (i/height - 0.5)*lx < 0.5*lx). Finally, it shifts the result to the center you have given.
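Substituted into the loop from the question, the two coordinate lines might then look like this (the cx, cy, lx, ly values are arbitrary example numbers, not something from the question):
// Example: zoom into a small window centered at (-0.75, 0.1).
double cx = -0.75, cy = 0.1;   // center of the view
double lx = 0.05,  ly = 0.05;  // side lengths of the view; smaller values = deeper zoom
double x0 = cx + ((double)i / height - 0.5) * lx;
double y0 = cy + ((double)j / width - 0.5) * ly;
// ... the escape-time while loop stays exactly the same
As the answer below notes, you will also want to raise max_iteration well above 15 once you zoom in, or the extra detail will not be visible.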
First off, with a max_iteration of 15 you're not going to see much detail. Mine has 1000 iterations per point as a baseline, and can go to about 8000 iterations before it really gets too slow to wait for.
This might help: http://jc.unternet.net/src/java/com/jcomeau/Mandelbrot.java
This too: http://www.wikihow.com/Plot-the-Mandelbrot-Set-By-Hand