LWJGL FPS spectator mode - camera

I'm trying to write a program that uses LWJGL and involves flying around in first person, sort of like spectator mode in an FPS game: you fly in whatever direction you're looking. I know how to do an FPS camera walking on the ground, but this one is supposed to go up and down as well. I've tried something, but it's atrociously inaccurate.
The following code is in the class responsible for the camera angle (positive y is up):
public void move(double mx, double my, double mz)
{
    this.x += mx;
    this.y += my;
    this.z += mz;
}

public void moveForward()
{
    rx = toDeg(rx);
    float speed = 0.25f;
    double xsin = Math.sin(Math.toRadians(rx));
    double ysin = Math.sin(Math.toRadians(ry + Math.signum(toDeg(rx + 90.00001f) - 180) * -90));
    double ycos = Math.cos(Math.toRadians(ry + Math.signum(toDeg(rx + 90.00001f) - 180) * -90));
    this.move(speed * ycos, speed * xsin, speed * ysin);
}
Thanks!

Never mind, I figured it out:
public void moveForward()
{
    rx = toDeg(rx);                 // pitch, normalized to degrees
    float speed = 0.25f;
    double xsin = Math.sin(Math.toRadians(rx));
    double xcos = Math.cos(Math.toRadians(rx));
    double flatLen = xcos * speed;  // length of the horizontal (xz) component
    double ysin = Math.sin(Math.toRadians(ry + 90));
    double ycos = Math.cos(Math.toRadians(ry + 90));
    this.move(flatLen * ycos, speed * xsin, flatLen * ysin);
}
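In other words, the fixed version is just the spherical decomposition of the look vector: the pitch r_x splits the speed s into a vertical part and a horizontal part, and the yaw r_y (offset by 90 degrees for this coordinate convention) spreads the horizontal part over x and z:

\mathbf{v} = s \left( \cos r_x \cos(r_y + 90^\circ),\; \sin r_x,\; \cos r_x \sin(r_y + 90^\circ) \right)

Since \cos^2 r_x \left(\cos^2(r_y + 90^\circ) + \sin^2(r_y + 90^\circ)\right) + \sin^2 r_x = 1, the magnitude is always exactly s, no matter where you look.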

Related

Bidirectional path tracing

I'm making a bidirectional path tracer and I'm having some trouble.
To be clear:
1) One point light
2) All objects are diffuse
3) All objects are spheres, even the walls (they are just very large)
4) NO MIS WEIGHTING
The light emission is a 3D vector. The BRDF of a sphere is a 3D vector. Hard-coded.
In the main function below I generate EyePath and LightPath, then I connect them. At least I try to.
In this post I'll talk about the main function, then EyePath, then LightPath. The discussion of the connecting function will come once EyePath and LightPath are good.
First questions:
Is the generation of the first light point correct?
Do I need to compute this point according to the emission of the light source, or is it just the emission? The relevant lines are commented where I fill the Vertices structure.
Do I need to translate fromLight, in order to put it on the sphere?
The code below is from the main function. Above it there are two for loops going through all the pixels. camera.o is the eye; cameraRayDir is the direction to the current pixel.
// The light path's starting point is at the same position as the light
Ray fromLight(Vec(0, 24.3, 0), Vec());
Sphere light = spheres[7];

#define PDF 0.15915494309 // 1 / (2 * PI)

for (int i = 0; i < samps; ++i)
{
    std::vector<Vertices> PathEye;
    std::vector<Vertices> PathLight;
    Vec cameraRayDir = cx * (double(x) / w - .5) + cy * (double(y) / h - .5) + camera.d;
    Ray rayEye(camera.o, cameraRayDir.norm());
    // Hemisphere oriented towards the top
    fromLight.d = generateRayInHemisphere(fromLight.o, Vec(0, 1, 0)).d;
    double f = clamp(n.dot(fromLight.d.norm()));
    Vertices vert;
    vert.d = fromLight.d;
    vert.x = fromLight.o;
    vert.id = 7;
    vert.cos = f;
    vert.n = Vec(0, 1, 0).norm();
    // this one?
    //vert.couleur = spheres[7].e * f / PDF;
    // Or this one?
    vert.couleur = spheres[7].e;
    PathLight.push_back(vert);
    int sizeEye = generateEyePath(PathEye, rayEye, maxDepth);
    int sizeLight = generateLightPath(PathLight, fromLight, maxDepth);
    for (int s = 0; s < sizeLight; ++s)
    {
        for (int t = 1; t < sizeEye; ++t)
        {
            int depth = t + s - 1;
            if ((s == 0 && t == 0) || depth < 0 || depth > maxDepth)
                continue;
            pixelValue = pixelValue + connectPaths(PathEye, PathLight, s, t);
        }
    }
}
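For reference on the first question: in the usual (unweighted) bidirectional bookkeeping, each stored vertex carries the path throughput accumulated so far. Assuming a point light whose position is fixed (so nothing is sampled there) and whose outgoing direction is sampled uniformly over the hemisphere with pdf 1/(2\pi), that looks like

\alpha_0 = L_e, \qquad \alpha_{i+1} = \alpha_i \, \frac{f_r(x_i)\,\cos\theta_i}{p_\omega(\omega_i)}

i.e. the cosine and the 1/(2\pi) direction pdf of the first bounce belong to the throughput of the next vertex (applied while extending the path), not to the light vertex itself.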
For the EyePath, I intersect the geometry and then compute the illumination according to the distance to the light. The colour is black if the point is in shadow.
Second question: for the eye path and the direct illumination, is the computation correct? In a lot of code I've seen, people use the pdf even for direct illumination, but I'm only using a point light and spheres.
int generateEyePath(std::vector<Vertices>& v, Ray eye, int maxDepth)
{
    double t;
    int id = 0;
    Vertices vert;
    while (v.size() <= maxDepth)
    {
        // Russian roulette
        if (distribRREye(generatorRREye) < 10)
            break;
        // Intersect all the geometry;
        // id is the index of the intersected geometry in an array
        intersect(eye, t, id);
        const Sphere& obj = spheres[id];
        // Intersection point
        Vec x = eye.o + eye.d * t;
        // Normal
        Vec n = (x - obj.p).norm();
        Vec direction = light.p - x;
        // Shadow ray
        Ray RaytoLight = Ray(x, direction.norm());
        const float distance = direction.length();
        // Shadow test
        const bool visibility = intersect(RaytoLight, t, id);
        const Sphere& lumiere = spheres[id];
        float degree = clamp(n.dot((lumiere.p - x).norm()));
        // If the intersected geometry is not a light, the point is in shadow
        if (lumiere.e.x == 0)
        {
            vert.couleur = Vec();
        }
        else
        {
            // obj.c is the BRDF, lumiere.e is the emission
            vert.couleur = (obj.c).mult(lumiere.e / (distance * distance)) * degree;
        }
        vert.x = x;
        vert.id = id;
        vert.n = n;
        vert.d = eye.d.norm();
        vert.cos = degree;
        v.push_back(vert);
        eye = generateRayInHemisphere(x, n);
    }
    return v.size();
}
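A note on the second question: with a pure point light the shadow-ray connection is deterministic, there is nothing to sample on the light, so no pdf appears in the direct term. Assuming lumiere.e plays the role of the light intensity I, the quantity the code stores is

L_o(x) = f_r(x)\,\frac{I}{d^2}\,\cos\theta\;V(x, \text{light})

with V the shadow-ray visibility. The pdf you see in other people's code comes from sampling a point on an area light; for a point light that pdf is a delta and cancels out.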
For the LightPath, I compute each point from the previous one and the values at that point, as in ordinary path tracing.
Third question: is the colour computation correct?
int generateLightPath(std::vector<Vertices>& v, Ray fromLight, int maxDepth)
{
    double t;
    int id = 0;
    Vertices vert;
    Vec previous;
    while (v.size() <= maxDepth)
    {
        // Russian roulette
        if (distribRRLight(generatorRRLight) < 10)
            break;
        previous = v.back().couleur;
        intersect(fromLight, t, id);
        // Intersected geometry
        const Sphere& obj = spheres[id];
        // Intersection point
        Vec x = fromLight.o + fromLight.d * t;
        // Normal
        Vec n = (x - obj.p).norm();
        double f = clamp(n.dot(fromLight.d.norm()));
        // obj.c is the BRDF
        vert.couleur = previous.mult(((obj.c / M_PI) * f) / PDF);
        vert.x = x;
        vert.id = id;
        vert.n = n;
        vert.d = fromLight.d.norm();
        vert.cos = f;
        v.push_back(vert);
        fromLight = generateRayInHemisphere(x, n);
    }
    return v.size();
}
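Plugging the hard-coded values into the throughput recursion above: with a diffuse BRDF \rho/\pi and uniform hemisphere sampling (pdf 1/(2\pi)),

\alpha_{i+1} = \alpha_i \cdot \frac{(\rho/\pi)\,\cos\theta}{1/(2\pi)} = 2\,\alpha_i\,\rho\,\cos\theta

One thing worth double-checking: the \cos\theta in that factor is conventionally taken at the vertex the ray leaves (between its normal and the outgoing direction), whereas f here is computed at the newly intersected point.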
For the moment I get this result (rendered image omitted).
The connecting function will come once EyePath and LightPath are good.
Thank you all
Try the spherical reference scene mentioned in the paper below. I think you can then work out most of your questions by yourself, since it has an analytical solution.
https://www.researchgate.net/publication/221546261_Testing_Monte-Carlo_Global_Illumination_Methods_with_Analytically_Computable_Scenes
It would save you time to implement and verify your understanding with path tracing and light tracing separately first, and then try to combine them with weights.

Distance between point and finite line in Objective-C

I've looked up some formulas for finding the distance between a point and a line. I used formula (14) on this page:
http://mathworld.wolfram.com/Point-LineDistance2-Dimensional.html
I have a method that has turned into this:
+(bool) checkPointNearBetweenPointsWithPointA:(CGPoint)pointA withPointB:(CGPoint)pointB withPointC:(CGPoint)pointC withLimit:(float)limit {
    float A = pointB.x - pointA.x;
    float B = pointA.y - pointC.y;
    float C = pointA.x - pointC.x;
    float D = pointB.y - pointA.y;
    float dividend = fabs( A * B ) - ( C * D );
    float divisor = sqrt(pow(A, 2) + pow(D, 2));
    float distanceBetweenPointAndLine = dividend / divisor;
    if (distanceBetweenPointAndLine < limit) {
        NSLog(@"distanceBetweenPointAndLine = %f", distanceBetweenPointAndLine);
        return YES;
    }
    return NO;
}
The problem is that it still returns YES when I'm past point B, if the line segment is drawn like B----A. Other things go wrong too, depending on the angle at which the line is drawn. I'm just wondering if I need to consider anything else when testing whether a point is near a finite line; most examples I find online deal with lines of infinite length.
Try my code below. The line is considered to exist between points A and B (regardless of whether you draw it B->A or A->B), and point C is the point whose distance to the line is measured.
+ (bool) checkPointNearBetweenPointsWithPointA:(CGPoint)pointA
                                    withPointB:(CGPoint)pointB
                                    withPointC:(CGPoint)pointC
                                     withLimit:(float)limit
{
    CGFloat slopeLine = atan((pointB.y - pointA.y) / (pointB.x - pointA.x));
    CGFloat slopePointToPointA = -1 * atan((pointC.y - pointA.y) / (pointC.x - pointA.x));
    CGFloat innerAngle = slopeLine + slopePointToPointA;
    CGFloat distanceAC = sqrtf(pow(pointC.y - pointA.y, 2) + pow(pointC.x - pointA.x, 2));
    CGFloat distanceBetweenPointAndLine = fabs(distanceAC * sin(innerAngle));
    NSLog(@"distanceBetweenPointAndLine = %f", distanceBetweenPointAndLine);
    NSLog(@"is exceeding limit ? %@", distanceBetweenPointAndLine > limit ? @"YES" : @"NO");
    if (distanceBetweenPointAndLine < limit)
    {
        return YES;
    }
    return NO;
}
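For the finite-segment behaviour the question asks about (no more YES once you are past an endpoint), another common approach is to project C onto AB, clamp the projection parameter to [0, 1], and measure the distance to that clamped point. A minimal sketch in plain C (it compiles as Objective-C, with CGPoint coming from CoreGraphics; the function name is illustrative):

#include <math.h>

float distancePointToSegment(CGPoint a, CGPoint b, CGPoint c)
{
    float abx = b.x - a.x, aby = b.y - a.y;        // segment direction A -> B
    float acx = c.x - a.x, acy = c.y - a.y;        // A -> C
    float lenSq = abx * abx + aby * aby;           // |AB|^2
    float t = (lenSq > 0.0f) ? (acx * abx + acy * aby) / lenSq : 0.0f;
    if (t < 0.0f) t = 0.0f;                        // before A: clamp to A
    if (t > 1.0f) t = 1.0f;                        // past B: clamp to B
    float px = a.x + t * abx, py = a.y + t * aby;  // closest point on the segment
    return sqrtf((c.x - px) * (c.x - px) + (c.y - py) * (c.y - py));
}

With t clamped, a point beyond B is measured against B itself, so the check stops returning YES as soon as the point is farther than limit from the endpoint.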

Calculating center of mass of a body being tracked using Kinect?

I am working with the Kinect for my research project. I have previously calculated joint angles and joint coordinates from the Kinect. I would now like to calculate the center of mass of the body being tracked.
Any ideas would be appreciated, and code snippets would be immensely helpful.
I owe a lot to Stack Overflow; without the community's help this would not have been possible.
Thanks in advance.
Please find below the code where I want to include the center-of-mass function. This function tracks the skeleton:
Skeleton GetFirstSkeleton(AllFramesReadyEventArgs e)
{
    using (SkeletonFrame skeletonFrameData = e.OpenSkeletonFrame())
    {
        if (skeletonFrameData == null)
        {
            return null;
        }
        skeletonFrameData.CopySkeletonDataTo(allSkeletons);
        // Get the first tracked skeleton
        Skeleton first = (from s in allSkeletons
                          where s.TrackingState == SkeletonTrackingState.Tracked
                          select s).FirstOrDefault();
        return first;
    }
}
I have tried using the code below in mine, but it doesn't fit in. Can anyone please help me include the center-of-mass code?
foreach (SkeletonData data in skeletonFrame.Skeletons)
{
    SkeletonFrame allskeleton = e.SkeletonFrame;
    // Count passive and active persons, up to six in the group
    int numberOfSkeletonsT = (from s in allskeleton.Skeletons
                              where s.TrackingState == SkeletonTrackingState.Tracked
                              select s).Count();
    int numberOfSkeletonsP = (from s in allskeleton.Skeletons
                              where s.TrackingState == SkeletonTrackingState.PositionOnly
                              select s).Count();
    int totalSkeletons = numberOfSkeletonsP + numberOfSkeletonsT;
    //Console.WriteLine("TotalSkeletons = " + totalSkeletons);
    //======================================================
    if (data.TrackingState == SkeletonTrackingState.PositionOnly)
    {
        foreach (Joint joint in data.Joints)
        {
            if (joint.Position.Z != 0)
            {
                // com is the center-of-mass value I am trying to obtain here
                double centerofmassX = com.Position.X;
                double centerofmassY = com.Position.Y;
                double centerofmassZ = com.Position.Z;
                Console.WriteLine(centerofmassX + centerofmassY + centerofmassZ);
            }
        }
    }
}
See a couple of resources here:
http://mathwiki.ucdavis.edu/Calculus/Vector_Calculus/Multiple_Integrals/Moments_and_Centers_of_Mass#Three-Dimensional_Solids
http://www.slideshare.net/GillianWinters/center-of-mass-presentation
http://en.wikipedia.org/wiki/Locating_the_center_of_mass
Basically, no matter what, you are going to need the mass of your user. This can be a simple input; then you can determine how much weight the person puts on each foot and use the equations described in all of these sources. Another option may be to use plumb lines on a planar 2D representation of the user, but that won't be an accurate 3D center of mass.
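For reference, all of those sources come down to the same weighted average, where m_i and \mathbf{r}_i are the mass and position of each body segment:

\mathbf{r}_{\mathrm{cm}} = \frac{\sum_i m_i\,\mathbf{r}_i}{\sum_i m_i}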
Here is an example of how to find what amount of mass is on each foot, using the equation found at http://www.vitutor.com/geometry/distance/line_plane.html:
Vector3 v = new Vector3(skeleton.Joints[JointType.Head].Position.X,
                        skeleton.Joints[JointType.Head].Position.Y,
                        skeleton.Joints[JointType.Head].Position.Z);
double mass = 0; // body mass: a simple user input, as described above
double leftM, rightM;
double A = sFrame.FloorClipPlane.X,
       B = sFrame.FloorClipPlane.Y,
       C = sFrame.FloorClipPlane.Z;
// Angle between the head vector and the floor plane, in degrees
double angle = Math.Asin(Math.Abs(A * v.X + B * v.Y + C * v.Z)
               / (Math.Sqrt(A * A + B * B + C * C) * Math.Sqrt(v.X * v.X + v.Y * v.Y + v.Z * v.Z)))
               * 180.0 / Math.PI;
if (angle == 90.0)
{
    leftM = mass / 2.0;
    rightM = mass / 2.0;
}
else
{
    double distanceFrom90 = 90.0 - angle;
    if (distanceFrom90 > 0)
    {
        double leftMultiple = distanceFrom90 / 90.0;
        leftM = mass * leftMultiple;
        rightM = mass - leftM;
    }
    else
    {
        double rightMultiple = distanceFrom90 / 90.0;
        rightM = rightMultiple * mass;
        leftM = mass - rightM;
    }
}
This is of course assuming that the user is on both feet, but you could modify the code to create a new plane based on the user's feet instead of the automatic one generated by the Kinect.
To then find the center of mass, you have to choose a datum. I would choose the head, as that is the top of the person and you can easily measure down from it. Using the steps found here:
double distanceFromDatumLeft = Math.Sqrt(Math.Pow(headX - footLeftX, 2) + Math.Pow(headY - footLeftY, 2) + Math.Pow(headZ - footLeftZ, 2));
double distanceFromDatumRight = Math.Sqrt(Math.Pow(headX - footRightX, 2) + Math.Pow(headY - footRightY, 2) + Math.Pow(headZ - footRightZ, 2));
double momentLeft = distanceFromDatumLeft * leftM;
double momentRight = distanceFromDatumRight * rightM;
double momentSum = momentLeft + momentRight;
//measured in units from the datum
double centerOfGravity = momentSum / mass;
You can then of course show this on the screen by plotting a point centerOfGravity units below the head.
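In equation form, with the head as the datum and d_L, d_R the two distances computed above, that last step is just the one-dimensional center-of-mass formula applied along the head-to-feet direction:

d_{\mathrm{cg}} = \frac{d_L\,m_L + d_R\,m_R}{m_L + m_R}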

How to use a skeletal joint to act as a cursor using bounds (no gestures)

I just want to be able to do something when my skeletal joint's (x, y, z) coordinates are over the button's (x, y, z) coordinates. I have the following code, but somehow it doesn't work properly: as soon as my hand moves, it triggers without my hand ever reaching the button.
if (skeletonFrame != null)
{
    //int skeletonSlot = 0;
    Skeleton[] skeletonData = new Skeleton[skeletonFrame.SkeletonArrayLength];
    skeletonFrame.CopySkeletonDataTo(skeletonData);
    Skeleton playerSkeleton = (from s in skeletonData
                               where s.TrackingState == SkeletonTrackingState.Tracked
                               select s).FirstOrDefault();
    if (playerSkeleton != null)
    {
        Joint rightHand = playerSkeleton.Joints[JointType.HandRight];
        handPosition = new Vector2((((0.5f * rightHand.Position.X) + 0.5f) * (640)), (((-0.5f * rightHand.Position.Y) + 0.5f) * (480)));
        var rightHands = playerSkeleton.Joints[JointType.HandRight];
        var rightHandsX = rightHands.Position.X;
        var rightHandsY = rightHands.Position.Y;
        var rightHandsZ = rightHands.Position.Z;
        if (Math.Sqrt(Math.Pow(rightHandsX - position.X, 2) + Math.Pow(rightHandsY - position.Y, 2)) < 20)
        {
            // Exit();
        }
        if (Math.Sqrt(Math.Pow(rightHandsX - start_bttn.Bounds.X, 1) + Math.Pow(rightHandsY - start_bttn.Bounds.Y, 1)) < 10)
        {
            currentGameState = GameState.Selection;
            // Exit();
        }
        if ((rightHandsX < GraphicsDevice.Viewport.Width / 2 + 150 && rightHandsX > GraphicsDevice.Viewport.Width / 2 - 75)
            && (rightHandsY > GraphicsDevice.Viewport.Height / 2 && rightHandsY < GraphicsDevice.Viewport.Height / 2 + 50))
        {
            currentGameState = GameState.Selection;
        }
    }
}
Here is my hand tracking function. See if it does what you want, or gets you closer...
private void TrackHandMovement(Skeleton skeleton)
{
    Joint leftHand = skeleton.Joints[JointType.HandLeft];
    Joint rightHand = skeleton.Joints[JointType.HandRight];
    Joint leftShoulder = skeleton.Joints[JointType.ShoulderLeft];
    Joint rightShoulder = skeleton.Joints[JointType.ShoulderRight];
    Joint rightHip = skeleton.Joints[JointType.HipRight];

    // The right hand joint is being tracked
    if (rightHand.TrackingState == JointTrackingState.Tracked)
    {
        // The hand is sufficiently in front of the shoulder
        if (rightShoulder.Position.Z - rightHand.Position.Z > 0.4)
        {
            double xScaled = (rightHand.Position.X - leftShoulder.Position.X) / ((rightShoulder.Position.X - leftShoulder.Position.X) * 2) * SystemParameters.PrimaryScreenWidth;
            double yScaled = (rightHand.Position.Y - rightShoulder.Position.Y) / (rightHip.Position.Y - rightShoulder.Position.Y) * SystemParameters.PrimaryScreenHeight;

            // The hand has moved enough to update the screen position (jitter control / smoothing)
            if (Math.Abs(rightHand.Position.X - xPrevious) > MoveThreshold || Math.Abs(rightHand.Position.Y - yPrevious) > MoveThreshold)
            {
                RightHandX = xScaled;
                RightHandY = yScaled;
                xPrevious = rightHand.Position.X;
                yPrevious = rightHand.Position.Y;

                // Reset the tracking timer
                trackingTimerCounter = 10;
            }
        }
    }
}
There is a bit of math in there to translate the hand position to the screen position. Different strokes for different folks, but my logic is:
Shoulders = top of screen
Hips = bottom of screen
Left shoulder = left-most on screen
To get the right-most screen position, I take the distance between the left and right shoulders and add it to the right shoulder.
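In equation form, the mapping in the snippet above is

x_{\text{screen}} = \frac{x_{\text{hand}} - x_{\text{LS}}}{2\,(x_{\text{RS}} - x_{\text{LS}})}\,W, \qquad y_{\text{screen}} = \frac{y_{\text{hand}} - y_{\text{RS}}}{y_{\text{hip}} - y_{\text{RS}}}\,H

where W and H are the screen dimensions, so the hand sweeps the full screen width as it moves from the left shoulder to one shoulder-width past the right shoulder.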

Get the GPS coordinate given the current location, bearing and distance

I'm trying to get a GPS coordinate given the current location, a bearing, and a distance.
But I can't find anything related to Cocoa Touch.
I've found the formula here: Get lat/long given current point, distance and bearing
Is there an easier method for iOS?
Thanks in advance.
I converted the code in the link to Objective-C:
- (double)degreesToRadians:(double)degrees {
    return degrees * M_PI / 180;
}

- (double)radiansToDegrees:(double)radians {
    return radians * 180 / M_PI;
}

// Note: distance is in kilometres and bearing is in radians,
// since the trig calls below use the bearing directly.
- (CLLocationCoordinate2D)remoteCoordinate:(CLLocationCoordinate2D)localCoordinate withDistance:(double)distance withBearing:(double)bearing {
    double earthRadius = 6378.1; // Radius of Earth in kilometres
    double rLat1 = [self degreesToRadians:localCoordinate.latitude];  // Convert latitude to radians
    double rLon1 = [self degreesToRadians:localCoordinate.longitude]; // Convert longitude to radians
    double rLat2 = asinl(sinl(rLat1) * cosl(distance / earthRadius) + cosl(rLat1) * sinl(distance / earthRadius) * cosl(bearing));
    double rLon2 = rLon1 + atan2l(sinl(bearing) * sinl(distance / earthRadius) * cosl(rLat1), cosl(distance / earthRadius) - sinl(rLat1) * sinl(rLat2));
    double dLat2 = [self radiansToDegrees:rLat2]; // Convert latitude back to degrees
    double dLon2 = [self radiansToDegrees:rLon2]; // Convert longitude back to degrees
    return CLLocationCoordinate2DMake(dLat2, dLon2);
}
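For reference, this is the standard destination-point formula on a sphere: with angular distance \delta = d/R and bearing \theta (in radians),

\varphi_2 = \arcsin\left( \sin\varphi_1 \cos\delta + \cos\varphi_1 \sin\delta \cos\theta \right)
\lambda_2 = \lambda_1 + \operatorname{atan2}\left( \sin\theta \sin\delta \cos\varphi_1,\; \cos\delta - \sin\varphi_1 \sin\varphi_2 \right)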