How wrong is my idea for calculating 2D coordinates from 3D space coordinate data?

I've been trying to write a program in Processing 3 that displays points from 3D space in the viewport. I won't show any code for now, because the point of this question is whether there are any disadvantages to my method of displaying space in perspective.
The idea behind my formula for calculating the X and Y screen coordinates of a point in 3D space is to compare the angles α and β (the angles in my picture; their exact values don't matter right now).
Since the (0, 0) coordinate on the screen is in the upper-left corner, here's what I came up with for X and Y:
where:
γ is the angle between the right edge of the FOV and the CP vector, i.e. simply α + β,
β is the half-angle of the field of view,
α is the angle between the camera's forward vector and the CP vector,
δ is the same as γ but in the XZ plane (looking up and down).
This way, when:
γ = 0 (α = −β), X is width, which puts the point at the right edge of the screen;
γ = β (α = 0), X is width/2, the middle of the screen; and
γ = 2β (α = β), X is 0, the left edge of the screen.
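Written out, that mapping is X = width · (1 − γ/(2β)). As a minimal Processing-style sketch (the function name is mine, for illustration only):

float xFromGamma(float gamma, float beta) {
  // gamma runs from 0 (right FOV edge) to 2*beta (left FOV edge);
  // width is Processing's built-in viewport width
  return width * (1 - gamma / (2 * beta));
}
// and analogously for Y, using delta and the vertical half-FOV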
I calculated the angles using the acos() and asin() functions, and they were correct.
The first problem was the α angle: it was never negative (acos() only returns values from 0 to π), which meant γ would never go below β, since γ = β + α ≥ β + 0.
I managed to fix it by multiplying α by the sign of the cross product of the CP and direction vectors (on the image) to determine whether α is positive or negative.
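In code, the fix boils down to something like this (a Processing-style sketch; the name and the exact cross-product components are illustrative and depend on your axis convention):

float signedAlpha(PVector dir, PVector cp) {
  float a = PVector.angleBetween(dir, cp);     // acos-based, always >= 0
  float cross = dir.x * cp.y - dir.y * cp.x;   // cross-product term in the horizontal plane
  return (cross < 0) ? -a : a;                 // flip the sign depending on the side
}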
At first I thought I had fixed the problem, but as soon as I rotated the camera, the point on the screen would move to the right only up to a certain value and then bounce back in the opposite direction (as if the cross product no longer worked, or something).
Do you see any downsides in my formulas?
Thanks for the help!

Related

Problem drawing a rectangle in Godot fragment shader

I have a fragment shader that draws some stuff. On top of that, I want it to draw a 1-pixel-thick rectangle around it. I have been using the step function, but the problem is that the UV coordinates run from 0.0 to 1.0. How do I know when the fragment is at a specific pixel? I want to draw on the edges.
c.r = step(0.99, UV.x);
c.r += step(0.99, 1.0-UV.x);
c.r += step(0.99, UV.y);
c.r += step(0.99, 1.0-UV.y);
The code above does draw a rectangle, but the problem is that the thickness is 1% (0.01) of the total width/height rather than one pixel.
Is there any good description of UV, FRAGCOORD, SCREEN_TEXTURE and SCREEN_UV?
If it is good enough for you to work in screen coordinates (i.e., you want to define position and thickness in terms of screen space), you can use FRAGCOORD. It corresponds to the (x, y) pixel coordinates within the viewport: with the default viewport of 1024 × 600, the lower-left pixel would be (0, 0) and the top-right would be (1024, 600).
If you want to map the fragment coordinates back to world space (i.e., you want to define position and thickness in terms of world space), you must follow the work-around mentioned here.
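With FRAGCOORD, the 1-pixel border test from the question becomes a direct pixel comparison. Here is the logic written out in plain Java for illustration only (an actual Godot shader would use FRAGCOORD.xy and the viewport size instead):

boolean onBorder(float px, float py, float w, float h) {
  // true within one pixel of any viewport edge
  return px < 1.0 || px > w - 1.0 || py < 1.0 || py > h - 1.0;
}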

How to find the point of collision between an irregular shape (built out of 3 circles) and a line

I'm making a program in which many weird shapes are drawn onto a canvas. Right now I'm trying to implement the last, and possibly hardest, one.
For this particular shape I need a way to find the location (on a 2D canvas) where a line hits the shape. The following image is an example of what I have right now.
The black dots are the points that are known to me (I also have the location of the center of each of the three open circles and their radius). Each of the three outer points needs a line towards the center dot, ending at the point where it hits the circle. This shape can be turned 90, 180 or 270 degrees.
The shape should look something like the following:
If you need any other information, please ask me in the comments. I'm not very good at math, so please be gentle. Thanks!
If A and B are points forming a line, then you can describe any point on that line using coordinates:
x = t·Ax + (1−t)·Bx
y = t·Ay + (1−t)·By
0 ≤ t ≤ 1
You can also describe the circle with center M and radius r as
(x − Mx)² + (y − My)² = r²
So take the x and y from the equations of the line, and plug them into the equation of the circle. You obtain a quadratic equation in t. Its two solutions describe the two points of intersection between the line and circle. In your example, only one of them lies on the line segment, i.e. satisfies 0 ≤ t ≤ 1. The other describes a point on the extension of the segment past its endpoint. Take the correct value for t back to the equations of the line, and you obtain the x and y coordinates of the point of intersection.
If you don't know up front which circle you want to intersect with a given line, then intersect all three and choose the most appropriate point afterwards. Probably that is the point closest to the outside starting point of the line segment. The same goes in cases where both points of intersection lie on the segment.
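A minimal plain-Java sketch of this procedure (names are mine), using exactly the parameterization above, P(t) = t·A + (1−t)·B = B + t·(A − B), so t = 0 is at B and t = 1 is at A:

// Returns the t of an intersection lying on the segment, or NaN if none.
static double intersectT(double ax, double ay, double bx, double by,
                         double mx, double my, double r) {
  double dx = ax - bx, dy = ay - by;     // direction from B towards A
  double fx = bx - mx, fy = by - my;     // B relative to the circle center M
  // Plugging x(t), y(t) into the circle equation gives a*t^2 + b*t + c = 0:
  double a = dx * dx + dy * dy;
  double b = 2 * (fx * dx + fy * dy);
  double c = fx * fx + fy * fy - r * r;
  double disc = b * b - 4 * a * c;
  if (disc < 0) return Double.NaN;       // the line misses the circle entirely
  double sq = Math.sqrt(disc);
  double t1 = (-b - sq) / (2 * a);       // smaller root, closer to B
  double t2 = (-b + sq) / (2 * a);
  if (t1 >= 0 && t1 <= 1) return t1;     // prefer a hit on the segment itself
  if (t2 >= 0 && t2 <= 1) return t2;
  return Double.NaN;                     // both hits lie past the endpoints
}

If both roots lie on the segment, the caller can pick whichever is closer to the outer starting point, as noted above.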

Reflecting a circle off another circle

I'm working with iPhone and Objective-C.
I am working on a game and I need to correctly reflect a ball off a circular object. I am trying to treat it as a line-circle intersection. I have the ball's position outside the circle, and the new ball position that would be inside the circle at the next draw update. I know the intersection point of the line (the ball's path) and the circle. Now I want to rotate the end point of the ball's path about the intersection point to get the correct angle of reflection off the tangent.
The following are known:
ball current x,y
ball end x,y
ball radius
circle center x,y
circle radius
intersection point (x, y) of the ball path and the circle
I know I need to find the angle of incidence between the tangent line and the incoming ball path, which will also equal my angle of reflection. I think once I know those two angles, I can subtract them from 180 to get my rotation angle, then rotate my end point about the intersection point by that amount. I just don't know how.
First, you should note that the center of the ball doesn't have to be inside of the circle to indicate that there's a reflection or bounce. As long as the distance between ball center and circle is less than the radius of the ball, there will be a bounce.
If the radius of the circle is R and the radius of the ball is r, things are simplified if you convert to the case where the circle has radius R+r and the ball has radius 0. For the purposes of collision detection and reflection/bouncing, this is equivalent.
If you have the point of intersection between the (enlarged) circle and the ball's path, you can easily compute the normal N to the circle at that point (it is the unit vector in the direction from the center of the circle to the collision point).
For an incoming vector V the reflected vector is V-2(N⋅V) N, where (N⋅V) is the dot product. For this problem, the incoming vector V is the vector from the intersection point to the point inside the circle.
As for the reflection formula given above, it is relatively easy to derive using vector math, but you can also Google search terms like "calculate reflection vector". The signs in the formula will vary with the assumed directions of V and N. MathWorld has a derivation although, as noted, the signs are different.
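A small plain-Java sketch of that reflection step (names are mine); N must be a unit vector:

// Normal at the collision point C on a circle centered at M:
//   n = (C - M) / |C - M|
static double[] reflect(double vx, double vy, double nx, double ny) {
  double d = nx * vx + ny * vy;                       // N . V
  return new double[] { vx - 2 * d * nx, vy - 2 * d * ny };
}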
I only know the solution to the geometry part.
Let:
r1 => Radius of ball
r2 => Radius of circle
You can calculate the distance between the centers of the two circles using the Pythagorean theorem.
If the distance is less than r1 + r2, then handle the collision.
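As a quick plain-Java sketch of that test (variable names are mine):

double dx = ballX - circleX, dy = ballY - circleY;
double dist = Math.sqrt(dx * dx + dy * dy);   // Pythagorean theorem
boolean colliding = dist < r1 + r2;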
For the collision, I would refer you here. It's in Python, but it should give you an idea of what to do and, hopefully, let you implement it in Objective-C. (The tutorial is by PeterCollingRidge.)

Calculating collision for a moving circle, without overlapping the boundaries

Let's say I have a circle bouncing around inside a rectangular area. At some point this circle will collide with one of the sides of the rectangle and reflect back. The usual way I'd do this would be to let the circle overlap that boundary and then reflect the velocity vector. The fact that the circle actually overlaps the boundary isn't usually a problem, nor really noticeable, at low velocity. At high velocity it becomes quite clear that the circle is doing something it shouldn't.
What I'd like to do is programmatically take reflection into account and place the circle at its proper position before displaying it on the screen. This means that I have to calculate the point where it hits the boundary, between its current position and its future position, rather than calculating its new position and then checking if it has hit the boundary.
This is a little more complicated than the usual circle/rectangle collision problem. I have a vague idea of how I should do it: basically, create a bounding rectangle between the current position and the new position, which brings up a slew of problems of its own (since the rectangle is rotated according to the direction of the circle's velocity). However, I'm thinking that this is a common problem and that a common solution already exists.
Is there a common solution to this kind of problem? Perhaps some basic theories which I should look into?
Since you just have a circle and a rectangle, it's actually pretty simple. A circle of radius r bouncing around inside a rectangle of dimensions w, h can be treated the same as a point p at the circle's center, inside a rectangle inset by r on each side, i.e. of dimensions (w − 2r), (h − 2r).
Now position update becomes simple. Given your point at position x, y and a per-frame velocity of dx, dy, the updated position is x+dx, y+dy - except when you cross a boundary. If, say, you end up with x+dx > W (letting W = w-r), then you do the following:
crossover = (x+dx) - W // this is how far "past" the edge your ball went
x = W - crossover // so you bring it back the same amount on the correct side
dx = -dx // and flip the velocity to the opposite direction
And similarly for y. You'll have to set up a similar (reflected) check for the opposite boundaries in each dimension.
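A compact per-axis version of that update, sketched in plain Java (names are mine; lo and hi are the reduced boundaries, e.g. r and w − r, and the step is assumed smaller than the box):

static double[] bounce(double p, double v, double lo, double hi) {
  p += v;
  if (p > hi) { p = hi - (p - hi); v = -v; }   // crossed the high edge: fold back
  if (p < lo) { p = lo + (lo - p); v = -v; }   // crossed the low edge: fold back
  return new double[] { p, v };                // updated position and velocity
}

Call it once per frame for x and once for y.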
At each step, you can calculate the projected/expected position of the circle for the next frame.
If this lies outside the rectangle, you can use the distance from the old circle position to the rectangle's edge, together with the amount the next position lies "past" the edge (the interpenetration), to linearly interpolate and determine the precise time at which the circle hits the rectangle's edge.
For example, if the circle is 10 pixels away from the rectangle's edge and is predicted to move to 5 pixels beyond it, you know that for 2/3rds of the timestep (10/15ths) it moves on its original path, then is reflected and continues on its new path for the remaining 1/3rd of the timestep (5/15ths). By calculating these two parts of the motion and "adding" the translations together, you can find the correct new position.
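Sketched in plain Java (names are mine):

// distToEdge   = distance from the old position to the edge,
// penetration  = how far past the edge the predicted position lies.
static double timeOfImpact(double distToEdge, double penetration) {
  // e.g. 10 px away, ending 5 px past the edge -> 10/15 = 2/3 of the step
  return distToEdge / (distToEdge + penetration);
}

Move along the original velocity for that fraction of the timestep, reflect the velocity, then move along the new velocity for the remaining fraction.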
(Of course, it gets more complicated if you hit near a corner, as there may be several collisions during the timestep, off different edges. And if you have more than one circle moving, things get a lot more complex. But that's where you can start for the case you've asked about)
Reflection across a rectangular boundary is incredibly simple. Just take the amount that the object passed the boundary and subtract it from the boundary position. If the position without reflecting would be (-0.8,-0.2) for example and the upper left corner is at (0,0), the reflected position would be (0.8,0.2).

Detecting Special touch on the iphone

I was asking myself whether there are examples online that cover how to, for instance, detect shapes in touch gestures,
for example a rectangle or a circle (or, more complex, a heart ..),
or determine the speed of swiping (over time, like "I'm swiping my iPhone at 50 mph").
For very simple gestures (horizontal vs. vertical swipe), calculate the difference in x and y between two touches.
dy = abs(y2 - y1)
dx = abs(x2 - x1)
f = dy/dx
An f close to zero is a horizontal swipe. An f close to 1 is a diagonal swipe. And a very large f is a vertical swipe (keep in mind that dx could be zero, so the above won't yield valid results for all x and y).
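A division-free variant of the same idea, sketched in plain Java (names are mine): atan2 handles dx == 0 and yields the swipe angle directly.

double angle = Math.atan2(Math.abs(y2 - y1), Math.abs(x2 - x1));
// near 0 -> horizontal, near PI/4 -> diagonal, near PI/2 -> vertical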
If you're interested in speed, Pythagoras can help. The distance travelled between two touches is:
l = sqrt(dx*dx + dy*dy)
If the touches happened at times t1 and t2, the speed is:
tdiff = abs(t2 - t1)
s = l/tdiff
It's up to you to determine which value of s you interpret as fast or slow.
You can extend this approach for more complex figures, e.g. your square shape could be a horizontal/vertical/horizontal/vertical swipe with start/end points where the previous swipe stopped.
For more complex figures, it's probably better to work with an idealized shape. One could consider a polygon shape as the ideal, and check that a range of touches
don't have too high a distance to their closest point on the polygon's outline, and
all touches follow the same direction along the polygon's outline.
You can refine things further from there.
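The first check boils down to a point-to-segment distance per polygon edge; a plain-Java sketch (names are mine):

// Distance from touch point P to its closest point on edge AB.
static double distToSegment(double px, double py,
                            double ax, double ay, double bx, double by) {
  double dx = bx - ax, dy = by - ay;
  double len2 = dx * dx + dy * dy;
  // Project P onto AB and clamp the projection to the segment:
  double t = (len2 == 0) ? 0 : ((px - ax) * dx + (py - ay) * dy) / len2;
  t = Math.max(0, Math.min(1, t));
  double cx = ax + t * dx, cy = ay + t * dy;   // closest point on the edge
  return Math.hypot(px - cx, py - cy);
}

Taking the minimum over all edges gives a touch's distance to the outline.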
There exist other methods for detecting non-simple touches on a touchscreen. Check out the $1 Unistroke Recognizer from the University of Washington: http://depts.washington.edu/aimgroup/proj/dollar/
It basically works like this:
Resample the recorded path into a fixed number of points that are evenly spaced along the path.
Rotate the path so that the first point is directly to the right of the path's center of mass.
Scale the path (non-uniformly) to a fixed height and width.
For each reference path, calculate the average distance between corresponding points in the input path. The path with the lowest average point distance is the match.
What’s great is that the output of steps 1-3 is a reference path that can be added to the array of known gestures. This makes it extremely easy to give your application gesture support and create your own set of custom gestures, as you see fit.
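As a rough illustration of step 1, here is a resampling sketch in plain Java, based on my reading of the published $1 pseudocode (names and structure are mine):

static java.util.List<double[]> resample(java.util.List<double[]> pts, int n) {
  double pathLen = 0;
  for (int i = 1; i < pts.size(); i++) pathLen += dist(pts.get(i - 1), pts.get(i));
  double interval = pathLen / (n - 1);          // desired spacing between samples
  java.util.List<double[]> src = new java.util.ArrayList<>(pts);
  java.util.List<double[]> out = new java.util.ArrayList<>();
  out.add(src.get(0).clone());
  double acc = 0;                               // distance walked since the last sample
  for (int i = 1; i < src.size(); i++) {
    double d = dist(src.get(i - 1), src.get(i));
    if (acc + d >= interval) {
      double t = (interval - acc) / d;          // fraction along the current segment
      double[] q = { src.get(i - 1)[0] + t * (src.get(i)[0] - src.get(i - 1)[0]),
                     src.get(i - 1)[1] + t * (src.get(i)[1] - src.get(i - 1)[1]) };
      out.add(q);
      src.add(i, q);                            // q becomes the new segment start
      acc = 0;
    } else {
      acc += d;
    }
  }
  if (out.size() == n - 1) out.add(src.get(src.size() - 1).clone()); // rounding guard
  return out;
}
static double dist(double[] a, double[] b) { return Math.hypot(a[0] - b[0], a[1] - b[1]); }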
This has been ported to iOS by Adam Preble; the repo is on GitHub:
http://github.com/preble/GLGestureRecognizer