Find the point of a line after which it disappears - three.js - camera

I want to position x, y, z labels (sprites) on the axes I have in my scene. The problem is that zooming the camera should also move the labels accordingly, so that they stay at the edge of the screen.
So I just want a way to always know where each of the x, y, z axis lines leaves the camera's view, so I can update the labels' positions:
fiddle (here they are just static).
Here is pseudocode of what I might need to achieve that:
function update() {
    var pointInLinePosition = calculateLastVisiblePointOfXline();
    xSprite.position.set(pointInLinePosition.x, pointInLinePosition.y, pointInLinePosition.z);
}

function calculateLastVisiblePointOfXline() {
}

I found a solution which is satisfying enough (for me at least) but not perfect.
First, I create a frustum from the scene's camera:
var frustum = new THREE.Frustum();
// (in newer three.js releases this method is named setFromProjectionMatrix)
frustum.setFromMatrix(new THREE.Matrix4().multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse));
Then I check whether any of the frustum's planes intersects either of the axis lines in the scene:
for (var i = frustum.planes.length - 1; i >= 0; i--) {
    // intersection of this plane with the x-axis line
    var py = frustum.planes[i].intersectLine(new THREE.Line3(new THREE.Vector3(0, 0, 0), new THREE.Vector3(1000, 0, 0)));
    if (py !== undefined) {
        ySprite.position.x = py.x - 1;
    }
    // intersection of this plane with the z-axis line
    var px = frustum.planes[i].intersectLine(new THREE.Line3(new THREE.Vector3(0, 0, 0), new THREE.Vector3(0, 0, 1000)));
    if (px !== undefined) {
        xSprite.position.z = px.z - 1;
    }
}
If there is an intersection, I update the label's position using the return value of intersectLine(), which is the point of intersection.
This is the updated fiddle : fiddle
I hope that helps. It worked well enough in my case.

A correct test for intersections also has to make sure that the intersection point actually lies within the frustum, since the frustum's planes extend indefinitely and can therefore produce false-positive intersections.
One simple way of validating an intersection is to check its signed distance to all six planes: if every distance is greater than or equal to zero, the point is inside the frustum.
Adjusted code snippet from ThanosSar's answer:
const intersect = point => frustum.planes
    .map(plane =>
        plane.intersectLine(new THREE.Line3(new THREE.Vector3(0, 0, 0), point))
    )
    .filter(sect => sect != null)
    .filter(sect => frustum.planes.every(plane => plane.distanceToPoint(sect) >= -0.000001))[0];

const iy = intersect(new THREE.Vector3(1000, 0, 0));
if (iy != null)
    ySprite.position.x = iy.x - 1;

const ix = intersect(new THREE.Vector3(0, 0, 1000));
if (ix != null)
    xSprite.position.z = ix.z - 1;
(The comparison is with >= -0.000001 to account for floating point rounding errors)
Fiddle
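Putting the two snippets together: the camera's projectionMatrix and matrixWorldInverse change whenever you zoom or orbit, so the frustum has to be rebuilt before the test on every update. A minimal per-frame sketch, reusing the sprite names and the -1 offset from the answers above (your render loop is assumed to call update()):

function update() {
    // Rebuild the frustum from the camera's current matrices
    camera.updateMatrixWorld();
    var frustum = new THREE.Frustum();
    frustum.setFromMatrix(new THREE.Matrix4().multiplyMatrices(
        camera.projectionMatrix, camera.matrixWorldInverse));

    var intersect = function (point) {
        return frustum.planes
            .map(function (plane) {
                // note: newer three.js expects a target vector as a
                // second argument to intersectLine
                return plane.intersectLine(
                    new THREE.Line3(new THREE.Vector3(0, 0, 0), point));
            })
            .filter(function (sect) { return sect != null; })
            .filter(function (sect) {
                return frustum.planes.every(function (plane) {
                    return plane.distanceToPoint(sect) >= -0.000001;
                });
            })[0];
    };

    var iy = intersect(new THREE.Vector3(1000, 0, 0));
    if (iy != null) ySprite.position.x = iy.x - 1;

    var ix = intersect(new THREE.Vector3(0, 0, 1000));
    if (ix != null) xSprite.position.z = ix.z - 1;
}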

Related

game maker GML twin stick shooter analogue controller

I am trying to make a twin-stick shooter, but I cannot get the right analogue stick to shoot in the correct direction. Here is the code I have; the weapon sits on top of the player and rotates. It all works fine, I just need to know how to get the correct angle of the right stick and fire a bullet in that direction.
//set depth
depth = -y + obj_player.y_off_set - 1;

//analogue right stick face direction
var h_point = gamepad_axis_value(0, gp_axisrh);
var v_point = gamepad_axis_value(0, gp_axisrv);
if ((h_point != 0) || (v_point != 0))
{
    var pdir = point_direction(0, 0, h_point, v_point);
    var dif = angle_difference(pdir, image_angle);
    image_angle += median(-20, dif, 20);
}

//flips gun when turning
if (gamepad_axis_value(0, gp_axisrh) < -0.5)
{
    image_yscale = -1;
}
else if (gamepad_axis_value(0, gp_axisrh) > 0.5)
{
    image_yscale = 1;
}

//firing
fire = gamepad_button_check_pressed(0, gp_shoulderr) && alarm[0] <= 0;
if (fire)
{
    var face = point_direction(0, 0, gp_axisrh, gp_axisrv);
    var p = instance_create(x, y, obj_projectile);
    var xforce = lengthdir_x(20, face*90);
    var yforce = lengthdir_y(20, face*90);
    p.creator = id;
    with (p)
    {
        physics_apply_impulse(x, y, xforce, yforce);
    }
}
As this question is a couple of months old, I imagine you have found the solution to your problem by now, but hopefully this answer can help anyone else stuck on the issue.
Based on the code snippet you provided, it looks like if you remove the *90 from the lengthdir_ functions, your code should work.
Here is the code I wrote in my game to get 360 degree shooting working with the right analog stick (this code lives in the step event of the Player object):
if (shooting) {
    bullet = instance_create(x, y, Bullet);
    with (bullet) {
        haxis = gamepad_axis_value(0, gp_axisrh);
        vaxis = gamepad_axis_value(0, gp_axisrv);
        dir = point_direction(0, 0, haxis, vaxis);
        // both impulse components are taken along the stick's direction
        physics_apply_impulse(x, y, lengthdir_x(50, dir), lengthdir_y(50, dir));
    }
}
This particular thread on the GameMaker community forums was quite helpful as I researched how to solve this issue in my game.
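For anyone doing this outside GameMaker, the underlying math is just turning the stick vector into an angle and back into velocity components; point_direction and lengthdir_x/lengthdir_y wrap exactly this. A rough sketch in JavaScript (the function name is made up; the sign flips account for GameMaker's downward-growing y axis):

// Convert a right-stick reading to a bullet velocity.
function stickToVelocity(haxis, vaxis, speed) {
    // Screen y grows downward, so flip v to get a conventional math angle
    var angle = Math.atan2(-vaxis, haxis);
    return {
        vx: speed * Math.cos(angle),
        vy: -speed * Math.sin(angle) // flip back into screen coordinates
    };
}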

Bidirectional path tracing

I'm making a bidirectional path tracer and I'm having some trouble.
To be clear :
1) One point light
2) All objects are diffuse
3) All objects are spheres, even walls (they are very large)
4) NO MIS WEIGHTING
The light emission is a 3D vector, and the BRDF of a sphere is a 3D vector; both are hard-coded.
In the main function below I generate an EyePath and a LightPath, then I connect them. At least I try to.
In this post I will go through the main function, then EyePath, then LightPath. The connecting function will be discussed once EyePath and LightPath are correct.
First questions:
Is the generation of the first light point correct?
Do I need to compute this point according to the emission of the light source, or is it just the emission? The relevant lines are commented where I fill the Vertices structure.
Do I need to translate fromLight in order to put it on the surface of the light sphere?
The code below is excerpted from the main function. Above it, two for loops run over all pixels. camera.o is the eye; cameraRayDir is the direction to the current pixel.
//The light path's starting point is at the same position as the light
Ray fromLight(Vec(0, 24.3, 0), Vec());
Sphere light = spheres[7];

#define PDF 0.15915494309 // 1 / (2 * PI)

for (int i = 0; i < samps; ++i)
{
    std::vector<Vertices> PathEye;
    std::vector<Vertices> PathLight;
    Vec cameraRayDir = cx * (double(x) / w - .5) + cy * (double(y) / h - .5) + camera.d;
    Ray rayEye(camera.o, cameraRayDir.norm());

    // Sample a direction in the hemisphere oriented towards the top;
    // n is the light's normal here (set to (0,1,0) below)
    fromLight.d = generateRayInHemisphere(fromLight.o, Vec(0,1,0)).d;
    double f = clamp(n.dot(fromLight.d.norm()));

    Vertices vert;
    vert.d = fromLight.d;
    vert.x = fromLight.o;
    vert.id = 7;
    vert.cos = f;
    vert.n = Vec(0,1,0).norm();
    // this one?
    //vert.couleur = spheres[7].e * f / PDF;
    // Or this one?
    vert.couleur = spheres[7].e;
    PathLight.push_back(vert);

    int sizeEye = generateEyePath(PathEye, rayEye, maxDepth);
    int sizeLight = generateLightPath(PathLight, fromLight, maxDepth);

    for (int s = 0; s < sizeLight; ++s)
    {
        for (int t = 1; t < sizeEye; ++t)
        {
            int depth = t + s - 1;
            if ((s == 0 && t == 0) || depth < 0 || depth > maxDepth)
                continue;
            pixelValue = pixelValue + connectPaths(PathEye, PathLight, s, t);
        }
    }
}
For the EyePath, I intersect the geometry and then compute the illumination according to the distance to the light. The colour is black if the point is in shadow.
Second question: for the eye path and direct illumination, is the computation correct? In much of the code I have seen, people use the PDF even for direct illumination, but I am only using a point light and spheres.
int generateEyePath(std::vector<Vertices>& v, Ray eye, int maxDepth)
{
    double t;
    int id = 0;
    Vertices vert;
    int RussianRoulette;

    while (v.size() <= maxDepth)
    {
        if (distribRREye(generatorRREye) < 10)
            break;

        // Intersect all the geometry
        // id is the id of the intersected geometry in an array
        intersect(eye, t, id);
        const Sphere& obj = spheres[id];

        // Intersection point
        Vec x = eye.o + eye.d * t;
        // Normal
        Vec n = (x - obj.p).norm();

        Vec direction = light.p - x;
        // Shadow ray
        Ray RaytoLight = Ray(x, direction.norm());
        const float distance = direction.length();
        // Shadow test
        const bool visibility = intersect(RaytoLight, t, id);
        const Sphere& lumiere = spheres[id];
        float degree = clamp(n.dot((lumiere.p - x).norm()));

        // If the intersected geometry is not a light, the point is in shadow
        if (lumiere.e.x == 0)
        {
            vert.couleur = Vec();
        }
        else // else we compute the colour
        {
            // obj.c is the BRDF, lumiere.e is the emission
            vert.couleur = (obj.c).mult(lumiere.e / (distance * distance)) * degree;
        }

        vert.x = x;
        vert.id = id;
        vert.n = n;
        vert.d = eye.d.norm();
        vert.cos = degree;
        v.push_back(vert);

        eye = generateRayInHemisphere(x, n);
    }
    return v.size();
}
For the LightPath, I compute each point according to the previous one and the values at the current point, as in ordinary path tracing.
Third question: is the colour computation correct?
int generateLightPath(std::vector<Vertices>& v, Ray fromLight, int maxDepth)
{
    double t;
    int id = 0;
    Vertices vert;
    Vec previous;

    while (v.size() <= maxDepth)
    {
        if (distribRRLight(generatorRRLight) < 10)
            break;

        // Colour accumulated along the path so far
        previous = v.back().couleur;

        intersect(fromLight, t, id);
        // Intersected geometry
        const Sphere& obj = spheres[id];
        // Intersection point
        Vec x = fromLight.o + fromLight.d * t;
        // Normal
        Vec n = (x - obj.p).norm();
        double f = clamp(n.dot(fromLight.d.norm()));

        // obj.c is the BRDF
        vert.couleur = previous.mult(((obj.c / M_PI) * f) / PDF);
        vert.x = x;
        vert.id = id;
        vert.n = n;
        vert.d = fromLight.d.norm();
        vert.cos = f;
        v.push_back(vert);

        fromLight = generateRayInHemisphere(x, n);
    }
    return v.size();
}
For the moment I get this result (render image omitted).
The connecting function will come once EyePath and LightPath are correct.
Thank you all.
Try the spherical reference scene mentioned in the paper below. Since it has an analytical solution, I think you can then work out most of your questions by yourself.
https://www.researchgate.net/publication/221546261_Testing_Monte-Carlo_Global_Illumination_Methods_with_Analytically_Computable_Scenes
It will save you time to implement and verify your understanding of path tracing and light tracing separately first, and then try to combine them with weights.

Making sense of a list of GPS values in an iOS application

I have a web service that interfaces with the google maps API to generate a polygon on a google map. The service takes the GPS values and stores them for retrieval.
The problem is that when I try to use these values in my iPhone app, the MKPolyline is either a mess or a bunch of zig-zag lines.
Is there a way to make sense of these values so I can reconstruct the polygon?
My current code looks like this:
private void GenerateMap()
{
    var latCoord = new List<double>();
    var longCoord = new List<double>();
    var pad = AppDelegate.Self.db.GetPaddockFromCrop(crop);

    mapMapView.MapType = MKMapType.Standard;
    mapMapView.ZoomEnabled = true;
    mapMapView.ScrollEnabled = false;

    mapMapView.OverlayRenderer = (m, o) =>
    {
        if (o.GetType() == typeof(MKPolyline))
        {
            var p = new MKPolylineRenderer((MKPolyline)o);
            p.LineWidth = 2.0f;
            p.StrokeColor = UIColor.Green;
            return p;
        }
        else
            return null;
    };

    scMapType.ValueChanged += (s, e) =>
    {
        switch (scMapType.SelectedSegment)
        {
            case 0:
                mapMapView.MapType = MKMapType.Standard;
                break;
            case 1:
                mapMapView.MapType = MKMapType.Satellite;
                break;
            case 2:
                mapMapView.MapType = MKMapType.Hybrid;
                break;
        }
    };

    if (pad.Boundaries != null)
    {
        var bounds = pad.Boundaries.OrderBy(t => t.latitude).ThenBy(t => t.longitude).ToList();
        foreach (var l in bounds)
        {
            latCoord.Add(l.latitude);
            longCoord.Add(l.longitude);
        }

        if (latCoord.Count > 0)
        {
            var coord = new List<CLLocationCoordinate2D>();
            for (int i = 0; i < latCoord.Count; ++i)
            {
                var c = new CLLocationCoordinate2D();
                c.Latitude = latCoord[i];
                c.Longitude = longCoord[i];
                coord.Add(c);
            }
            var line = MKPolyline.FromCoordinates(coord.ToArray());
            mapMapView.AddOverlay(line);
            mapMapView.SetVisibleMapRect(line.BoundingMapRect, true);
        }
    }
}
MKPolygon / MKPolygonRenderer gives the same sort of random line mess. The OrderBy LINQ makes no difference other than to make the random lines a zig-zag going up or down the view.
Since you don't know the order the points were captured in, you can't trace the actual path traveled around the perimeter of the paddock; this is why your polylines are turning into silly-walks all over the map. Lacking that information, you can at best make an educated guess.
Some possible heuristics you might want to try:
Take the average of all the points to get a "somewhere in the middle" point, then order the points by atan2(l.latitude - middle.latitude, l.longitude - middle.longitude). (Be careful, atan2 is undefined at (0, 0)!) A sketch of this is shown after this list.
Take the convex hull of the captured points: for a relatively small number of points you can get away with the simple quadratic-time Jarvis's march. This has the approximate effect of wrapping a notional rubber band around the outside of the map push-pins, discarding points that would form concavities, and it also gives you the order of the remaining points.
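For illustration, here is a minimal sketch of the first heuristic (the centroid sort). It is written in JavaScript rather than the C# above to keep this page's examples in one style, and assumes points is an array of {latitude, longitude} objects; treat it as pseudocode for the Xamarin port:

// Order boundary points by the angle they make around their centroid.
function orderByAngleAroundCentroid(points) {
    // Average all points to get a "somewhere in the middle" reference.
    var middle = { latitude: 0, longitude: 0 };
    points.forEach(function (p) {
        middle.latitude += p.latitude / points.length;
        middle.longitude += p.longitude / points.length;
    });
    // Sort by atan2 of the offset from the centroid. Note that a point
    // exactly at the centroid gives atan2(0, 0), which is not meaningful.
    return points.slice().sort(function (a, b) {
        var angleA = Math.atan2(a.latitude - middle.latitude, a.longitude - middle.longitude);
        var angleB = Math.atan2(b.latitude - middle.latitude, b.longitude - middle.longitude);
        return angleA - angleB;
    });
}

Drawing the polyline through the points in this order, then closing it back to the first point, gives a sensible perimeter as long as the paddock is roughly convex; for strongly concave paddocks the convex hull approach in the second bullet loses the concavities, and you would need the original capture order to do better.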

Getting minimum number of MKMapRects from a MKPolygon

I have a function that takes two MKMapRects, where the second intersects the first. The function creates an MKPolygon that is the first rect minus the intersecting parts:
-(void) polygons:(MKMapRect)fullRect exclude:(MKMapRect)excludeArea {
    NSLog(@"Y is: %f height: %f", excludeArea.origin.y, excludeArea.size.height);

    // Distances between the outer rect's edges and the excluded rect's edges
    double top = excludeArea.origin.y - fullRect.origin.y;
    double lft = excludeArea.origin.x - fullRect.origin.x;
    double btm = (fullRect.origin.y + fullRect.size.height) - (excludeArea.origin.y + excludeArea.size.height);
    double rgt = (fullRect.origin.x + fullRect.size.width) - (excludeArea.origin.x + excludeArea.size.width);

    // Outer (o*) and inner (i*) edge coordinates
    double ot = fullRect.origin.y, it = (ot + top);
    double ol = fullRect.origin.x, il = (ol + lft);
    double ob = (fullRect.origin.y + fullRect.size.height), ib = (ob - btm);
    double or = (fullRect.origin.x + fullRect.size.width), ir = (or - rgt);

    MKMapPoint points[11] = {{ol,it}, {ol,ot}, {or,ot}, {or,ob}, {ol,ob}, {ol,it}, {il,it}, {ir,it}, {ir,ib}, {il,ib}, {il,it}};
    MKPolygon *polygon = [MKPolygon polygonWithPoints:points count:11];
}
My question now is: how do I get the minimum number of MKMapRects from this MKPolygon? I have done some googling as well as looking through the forum, but haven't found anything.
EDIT:
So the goal is the following:
I have an MKMapRect, rect1, and a list of rectangles, rectList, containing MKMapRects that intersect rect1. I want to create a rectilinear MKPolygon from rect1, remove the surface of every MKMapRect in rectList from it, and then decompose the resulting rectilinear polygon into the minimum number of MKMapRects.
Right now the problem is this: I can create a polygon when removing one MKMapRect from rect1, but I don't know how to remove the subsequent map rects, nor how to extract the minimum set of MKMapRects from the polygon created.
Best regards
Peep
I'm not sure if this is what you're looking for, or if I understand the question fully, but if all you need is the minimum number of rectangles in a polygon created by subtracting one rectangle from another, you should be able to get it by counting how many corner points of the second rectangle are contained in the first. In pseudocode:
int minNumRects(MKRect r1, MKRect r2) {
    int numPointsContained = 0;
    for (Point p in r2) {
        if (MKMapRectContainsPoint(r1, p)) {
            numPointsContained++;
        }
    }
    if (numPointsContained == 1) {
        return 2;   // one corner inside: an L-shape remains
    } else if (numPointsContained == 2) {
        return 3;   // one edge bites in: a U-shape remains
    } else if (numPointsContained == 4) {
        return 4;   // fully inside: a rectangular ring remains
    } else {
        return 0;
    }
}
P.S. This assumes that the rectangles are axis-aligned, but as far as I know that's the case with MKMapRects.
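The corner counting above tells you how many rectangles remain; to actually produce them (and to handle the multi-rect subtraction from the edit), one standard decomposition takes the band above the overlap, the band below it, and the left and right bands beside it. A minimal sketch, in JavaScript with plain {x, y, w, h} rectangles rather than MapKit types (the function name is made up for illustration):

// Subtract rect b from rect a; the remainder is returned as up to four
// axis-aligned rectangles: full-width bands above and below the overlap,
// plus left and right bands spanning the overlap's vertical extent.
function subtractRect(a, b) {
    // Intersection of a and b
    var ix1 = Math.max(a.x, b.x), iy1 = Math.max(a.y, b.y);
    var ix2 = Math.min(a.x + a.w, b.x + b.w), iy2 = Math.min(a.y + a.h, b.y + b.h);
    if (ix1 >= ix2 || iy1 >= iy2) return [a]; // no overlap: a is untouched

    var out = [];
    if (iy1 > a.y)       out.push({ x: a.x,  y: a.y, w: a.w,             h: iy1 - a.y });
    if (iy2 < a.y + a.h) out.push({ x: a.x,  y: iy2, w: a.w,             h: a.y + a.h - iy2 });
    if (ix1 > a.x)       out.push({ x: a.x,  y: iy1, w: ix1 - a.x,       h: iy2 - iy1 });
    if (ix2 < a.x + a.w) out.push({ x: ix2,  y: iy1, w: a.x + a.w - ix2, h: iy2 - iy1 });
    return out;
}

// To subtract a whole rectList, fold over it, subtracting each rect
// from every piece produced so far:
// rectList.reduce(function (pieces, r) {
//     return pieces.reduce(function (acc, p) { return acc.concat(subtractRect(p, r)); }, []);
// }, [rect1]);

For a single subtraction this reproduces the counts in the answer above (2 for one contained corner, 3 for two, 4 for all four). For several overlapping rects it stays rectangular at every step but is not guaranteed to reach the global minimum; that requires a proper rectilinear polygon decomposition.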

Rescale X across all plots in ZedGraph

I can successfully zoom x-only across all plots using the following code:
zg1.IsEnableHZoom = true;
zg1.IsEnableVZoom = false;
zg1.IsSynchronizeXAxes = true;
foreach (GraphPane gp in zg1.MasterPane.paneList)
{
    // What code can I put here?
}
My problem is that with this code, the Y-axis max and min stay based on the original view of the data. I want the Y-axis to autoscale so that the max and min are based only on the data made visible by the X-axis zoom (per graph pane, of course). Is there some command, or brute-force method, that I can use on each of the graph panes in the loop shown above? Thanks ahead of time for anyone's help.
You can use this in the loop (assuming the X axis scale's MinAuto and MaxAuto are false):
foreach (GraphPane gp in zg1.MasterPane.paneList)
{
    gp.YAxis.Scale.MinAuto = true;
    gp.YAxis.Scale.MaxAuto = true;
    // This will force ZedGraph to calculate the min and max of the Y axis
    // based on the X axis visible range
    gp.IsBoundedRanges = true;
}
zg1.MasterPane.AxisChange();
I had the same problem before and could not find a way other than to inspect all the curve points.
I added an event handler to the Paint event to do this; I'm sure there are ways this can be optimized.
Something like this:
private void graph_Paint(object sender, PaintEventArgs e)
{
    double min = Double.MaxValue;
    double max = Double.MinValue;
    CurveItem curve = graph.GraphPane.CurveList[0];

    // Find the Y extrema of the points inside the visible X range
    for (int i = 0; i < curve.Points.Count; i++)
    {
        if (curve.Points[i].X > graph.GraphPane.XAxis.Scale.Min &&
            curve.Points[i].X < graph.GraphPane.XAxis.Scale.Max)
        {
            min = Math.Min(curve.Points[i].Y, min);
            max = Math.Max(curve.Points[i].Y, max);
        }
    }

    if (min != Double.MaxValue)
    {
        // Apply the computed extrema to the Y axis
        graph.GraphPane.YAxis.Scale.Min = min;
        graph.GraphPane.YAxis.Scale.Max = max;
    }
}