3D space to 2D space transformation in LWJGL 3 (Minecraft)

I'm a mod developer for Minecraft, and I'm having problems after switching to a newer version of the game.
It now uses LWJGL 3, and because of that I don't know how to implement this method:
public Vector3f world2Screen(float x, float y, float z) {
    if (GLU.gluProject(x, y, z, ActiveRenderInfo.MODELVIEW, ActiveRenderInfo.PROJECTION,
            ActiveRenderInfo.VIEWPORT, vector)) {
        return new Vector3f(vector.get(0) / 2f, (Display.getHeight() - vector.get(1)) / 2f, vector.get(2));
    }
    return null;
}
There used to be a GLU.gluProject() method in LWJGL 2, but as far as I can see it is gone. What are the alternatives?
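For reference, gluProject was only a thin wrapper around standard matrix math, so it can be reimplemented by hand (or with a math library such as JOML). Below is a minimal sketch in Python/numpy of the computation it performed; the names are mine, and the matrices are assumed to be laid out so that clip = projection · modelview · v (OpenGL buffers are column-major, so transpose accordingly when porting):

import numpy as np

def glu_project(obj, modelview, projection, viewport):
    # Object coordinates -> homogeneous clip space
    v = np.array([obj[0], obj[1], obj[2], 1.0])
    clip = projection @ (modelview @ v)
    if clip[3] == 0.0:
        return None                      # degenerate case; gluProject reports failure
    ndc = clip[:3] / clip[3]             # perspective divide -> [-1, 1]
    win_x = viewport[0] + (ndc[0] + 1.0) * 0.5 * viewport[2]
    win_y = viewport[1] + (ndc[1] + 1.0) * 0.5 * viewport[3]
    win_z = (ndc[2] + 1.0) * 0.5         # depth in [0, 1]
    return (win_x, win_y, win_z)

The same four steps (transform, divide, viewport map, depth remap) translate directly to Java with LWJGL 3's matrices.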

Related

Cython: copy memoryview without gil

I have the following code section (appropriately simplified):
cpdef double func(double[:] x, double[:] y) nogil:
    cdef:
        double[:] _y
    _y = y  # Here's my trouble
    _y[2] = 2. - y[1]
    _y[1] = 1.
    return func2(x, _y)
I'm trying to create a copy of y that I can manipulate inside the function. The problem is that any changes made to _y get passed back to y; I don't want to change y, just this temporary copy of it.
The function is nogil, so I can't use _y = y.copy() (already tried). I also tried _y[:] = y, based on the Cython guidance pages, but apparently I can't do that if _y hasn't been initialized yet.
So... how do I make a copy of a 1d vector without invoking the gil?
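To illustrate why the assignment aliases instead of copying (this only demonstrates the problem, not a nogil-safe fix): a memoryview assignment binds another view of the same underlying buffer, exactly like rebinding a numpy array reference in plain Python:

import numpy as np

y = np.array([1.0, 2.0, 3.0])
_y = y           # binds a second reference to the SAME buffer
_y[1] = 99.0
print(y)         # [ 1. 99.  3.] -- the change is visible through y

_y = y.copy()    # an independent buffer (but .copy() needs the GIL in Cython)
_y[1] = -1.0
print(y)         # [ 1. 99.  3.] -- the original is now unaffected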

How to properly use the UnProject method in OpenTK and get world coordinates from mouse hover?

I am trying to get world coordinates by hovering my mouse over the GLControl where the world is being rendered. I've found dozens of examples and solutions, but I'm still learning and couldn't get them to work. I kept digging and picked one that is approved by its original author and works with a similar render setup, but it still doesn't seem to work for me.
I've set up a window to output the converted coordinates. I am getting x, y = 0 at the center of the control, which suggests I'm on the right track. When I move the mouse around the x and y axes it outputs sensible values (e.g. the higher the mouse is above the center of the control, the higher y is), but only up to about -1.65/1.65 from the center. It does not seem to use my Modelview matrix: the result should differ depending on eyeZ, but it doesn't. Changing eyeZ has no effect at all; it acts as if I didn't zoom in or out.
I've set up my Projection, Modelview matrices and Viewport like this:
GL.MatrixMode(MatrixMode.Projection)
GL.LoadIdentity()
GL.LoadMatrix(Matrix4.CreatePerspectiveFieldOfView(Math.PI / 4, glControl.Width / glControl.Height, 1.0F, 128.0F))
GL.MatrixMode(MatrixMode.Modelview)
GL.LoadIdentity()
GL.LoadMatrix(Matrix4.LookAt(0F, 0F, zoom, 0F, 0F, 0F, 0F, 1.0F, 0F))
GL.Viewport(0, 0, glControl.Width, glControl.Height)
Then I have these two functions that get called when my mouse moves over the GLControl and pass in the mouse's coordinates.
Public Function screenToWorld(ByVal x As Integer, ByVal y As Integer) As Vector4
    Dim modelViewMatrix As Matrix4
    Dim projectionMatrix As Matrix4
    Dim viewport(4) As Integer
    GL.GetFloat(GetPName.ModelviewMatrix, modelViewMatrix)
    GL.GetFloat(GetPName.ProjectionMatrix, projectionMatrix)
    GL.GetInteger(GetPName.Viewport, viewport)
    Return UnProject(projectionMatrix, modelViewMatrix, New Size(viewport(2), viewport(3)), New Vector2(x, y))
End Function
Public Function UnProject(ByRef projection As Matrix4, ByVal view As Matrix4, ByVal viewport As Size, ByVal mouse As Vector2) As Vector4
    Dim vec As Vector4
    vec.X = 2.0F * mouse.X / viewport.Width - 1
    vec.Y = -(2.0F * mouse.Y / viewport.Height - 1)
    vec.Z = 0F
    vec.W = 1.0F
    Dim viewInv As Matrix4 = Matrix4.Invert(view)
    Dim projInv As Matrix4 = Matrix4.Invert(projection)
    Vector4.Transform(vec, projInv, vec)
    Vector4.Transform(vec, viewInv, vec)
    If vec.W > Single.Epsilon OrElse vec.W < -Single.Epsilon Then ' note: -Epsilon, so the divide is only skipped when W is essentially 0
        vec.X /= vec.W
        vec.Y /= vec.W
        vec.Z /= vec.W
    End If
    Return vec
End Function
Also, I refrain from using orthographic projection.
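For comparison, here is a minimal numpy sketch (the names are mine) of the same unprojection the VB code performs: pixel coordinates to NDC, then back through the inverted projection and view matrices, then the perspective divide. One thing it makes visible: unprojecting at a fixed NDC depth always yields a point on a single plane in front of the camera, which would explain the small fixed range observed in the question; to get a point at an arbitrary scene depth you would unproject at two depths and intersect the resulting ray with the scene.

import numpy as np

def screen_to_world(mouse_x, mouse_y, proj, view, width, height, ndc_z=0.0):
    # Pixel coords (origin top-left) -> normalized device coords in [-1, 1]
    ndc = np.array([
        2.0 * mouse_x / width - 1.0,
        -(2.0 * mouse_y / height - 1.0),   # flip y: screen y grows downward
        ndc_z,
        1.0,
    ])
    # Invert the pipeline: clip = proj @ view @ world, so world = view^-1 @ proj^-1 @ clip
    world = np.linalg.inv(view) @ (np.linalg.inv(proj) @ ndc)
    if abs(world[3]) > 1e-12:
        world = world / world[3]           # perspective divide
    return world[:3]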

Determine whether a point is on a line segment

I am trying to write a Java method which returns true if a point (x, y) is on a line segment, and false if not.
I tried this:
public static boolean OnDistance(MyLocation a, MyLocation b, MyLocation queryPoint) {
    double value = java.lang.Math.signum((a.mLongitude - b.mLongitude) * (queryPoint.mLatitude - a.mLatitude)
            - (b.mLatitude - a.mLatitude) * (queryPoint.mLongitude - a.mLongitude));
    double compare = 1;
    if (value == compare) {
        return true;
    }
    return false;
}
but it doesn't work.
I am not a Java coder so I will stick to the math behind it ... For starters, let's assume you are on a plane (not a sphere's surface).
I would use vector math, so let:
a, b - be the line endpoints
q - the queried point
c = q - a - the vector from a to the queried point
d = b - a - the line direction vector
Use the dot product to extract the line parameter:
t = dot(c,d)/(|d|*|d|)
t is the line parameter in <0,1>; if it is out of that range, the perpendicular projection of q does not fall inside the segment
|c| = sqrt(c.x*c.x + c.y*c.y) is the size of a vector
dot(c,d) = c.x*d.x + c.y*d.y is the scalar (dot) product
Now compute the corresponding point on the line:
e = a + (t*d)
e is the closest point to q on the line ab.
Compute the perpendicular distance between q and ab:
l = |q-e|;
If l > threshold then q is not on the line ab, else it is. The threshold is the maximum distance from the line that you still accept as being on it. There is no need to take the square root of l; the threshold constant can be squared instead, for speed.
If you put all of this into a single equation, some terms simplify (I hope I did not make a silly math mistake):
l = |(q-a) - (b-a)*(dot(q-a,b-a)/|b-a|^2)|;
return (l <= threshold);
or
l = |c - (d*dot(c,d)/|d|^2)|;
return (l <= threshold);
As you can see, we do not even need sqrt for this :)
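A minimal Python sketch of the whole test (the names are mine), combining the parameter range check with squared distances so sqrt is indeed avoided:

def on_segment(a, b, q, threshold):
    # a, b, q are (x, y) tuples; threshold is the accepted distance from the segment
    dx, dy = b[0] - a[0], b[1] - a[1]      # d = b - a
    cx, cy = q[0] - a[0], q[1] - a[1]      # c = q - a
    dd = dx * dx + dy * dy                 # |d|^2
    if dd == 0.0:                          # degenerate segment: a == b
        return cx * cx + cy * cy <= threshold * threshold
    t = (cx * dx + cy * dy) / dd           # line parameter dot(c,d)/|d|^2
    if t < 0.0 or t > 1.0:                 # perpendicular foot outside the segment
        return False
    ex, ey = cx - t * dx, cy - t * dy      # q - e, the perpendicular offset
    return ex * ex + ey * ey <= threshold * threshold  # compare squared distances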
[Notes]
If you need a spherical or ellipsoidal surface instead, then you need to specify which one it is and what the semi-axes are. The line becomes an arc/curve and needs some corrections depending on the shape of the surface; see
Projecting a point onto a path
but it can also be done by approximation, and perhaps by a binary search for point e; see:
my approx class in C++
The vector math used can be found at the end of:
Understanding 4x4 homogenous transform matrices
Here is a 3D C++ implementation (with different names):
double distance_point_axis(double *p, double *p0, double *dp)
{
    int i;
    double l, d, q[3];
    for (i = 0; i < 3; i++) q[i] = p[i] - p0[i];          // q = p - p0
    for (l = 0.0, i = 0; i < 3; i++) l += dp[i] * dp[i];  // l = |dp|^2
    for (d = 0.0, i = 0; i < 3; i++) d += q[i] * dp[i];   // d = dot(q,dp)
    if (l < 1e-10) d = 0.0; else d /= l;                  // d = dot(q,dp)/|dp|^2
    for (i = 0; i < 3; i++) q[i] -= dp[i] * d;            // q = q - dp*dot(q,dp)/|dp|^2
    for (l = 0.0, i = 0; i < 3; i++) l += q[i] * q[i];    // l = |q|^2
    l = sqrt(l);                                          // l = |q|
    return l;
}
Here p0[3] is any point on the axis and dp[3] is the direction vector of the axis. p[3] is the queried point whose distance from the axis you want.

Convert from latitude, longitude to x, y

I want to convert GPS location (latitude, longitude) into x,y coordinates.
I found many links about this topic and applied them, but none gives me the correct answer!
I am following these steps to test the answer:
(1) First, I take two positions and calculate the distance between them using maps.
(2) Then I convert the two positions into x, y coordinates.
(3) Then I calculate the distance between the two points in x, y coordinates,
and check whether it gives me the same result as in step (1).
One of the solutions I found is the following, but it doesn't give me the correct answer!
latitude = Math.PI * latitude / 180;
longitude = Math.PI * longitude / 180;
// adjust position by radians
latitude -= 1.570795765134; // subtract 90 degrees (in radians)
// and switch z and y
xPos = (app.radius) * Math.sin(latitude) * Math.cos(longitude);
zPos = (app.radius) * Math.sin(latitude) * Math.sin(longitude);
yPos = (app.radius) * Math.cos(latitude);
I also tried this link, but it still does not work well for me.
Any help on how to convert from (latitude, longitude) to (x, y)?
Thanks.
No exact solution exists
There is no isometric map from the sphere to the plane. When you convert lat/lon coordinates from the sphere to x/y coordinates in the plane, you cannot hope that all lengths will be preserved by this operation. You have to accept some kind of deformation. Many different map projections do exist, which can achieve different compromises between preservations of lengths, angles and areas. For smallish parts of earth's surface, transverse Mercator is quite common. You might have heard about UTM. But there are many more.
The formulas you quote compute x/y/z, i.e. a point in 3D space. But even there you'd not get correct distances automatically. The shortest distance between two points on the surface of the sphere would go through that sphere, whereas distances on the earth are mostly geodesic lengths following the surface. So they will be longer.
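To make that last point concrete: the straight-line chord through the sphere is always shorter than the surface arc. A quick check under the spherical model:

import math

R = 6371.0                               # earth radius in km (spherical model)
arc = 1000.0                             # geodesic distance along the surface, km
chord = 2 * R * math.sin(arc / (2 * R))  # straight line through the sphere
print(chord)                             # about 999 km, slightly shorter than the arc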
Approximation for small areas
If the part of the surface of the earth which you want to draw is relatively small, then you can use a very simple approximation. You can simply use the horizontal axis x to denote longitude λ, the vertical axis y to denote latitude φ. The ratio between these should not be 1:1, though. Instead you should use cos(φ0) as the aspect ratio, where φ0 denotes a latitude close to the center of your map. Furthermore, to convert from angles (measured in radians) to lengths, you multiply by the radius of the earth (which in this model is assumed to be a sphere).
x = r λ cos(φ0)
y = r φ
This is simple equirectangular projection. In most cases, you'll be able to compute cos(φ0) only once, which makes subsequent computations of large numbers of points really cheap.
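A minimal sketch of this projection in Python (spherical model; the names are mine):

import math

R = 6371000.0  # mean earth radius in meters (spherical model)

def equirectangular(lat_deg, lon_deg, lat0_deg):
    # Project lat/lon in degrees to x/y in meters, treating a small area
    # around the reference latitude lat0 as flat.
    cos_phi0 = math.cos(math.radians(lat0_deg))  # compute once, reuse for many points
    x = R * math.radians(lon_deg) * cos_phi0
    y = R * math.radians(lat_deg)
    return x, y

Distances between projected points are then plain Euclidean distances, which is exactly the check described in steps (1)-(3) of the question.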
I want to share with you how I managed the problem. I've used the equirectangular projection just like @MvG said, but this method gives you X and Y positions relative to the globe (or the entire map), i.e. global positions. In my case, I wanted to convert coordinates in a small area (about 500 m square), so I related the projection to two reference points, mapping global positions to local (on-screen) positions, like this:
First, I choose two reference points (top-left and bottom-right) enclosing the area where I want to project.
Once I have the global reference area in lat and lng, I do the same for screen positions. The objects containing this data are shown below.
//top-left reference point
var p0 = {
    scrX: 23.69,     // Minimum X position on screen
    scrY: -0.5,      // Minimum Y position on screen
    lat: -22.814895, // Latitude
    lng: -47.072892  // Longitude
}
//bottom-right reference point
var p1 = {
    scrX: 276,       // Maximum X position on screen
    scrY: 178.9,     // Maximum Y position on screen
    lat: -22.816419, // Latitude
    lng: -47.070563  // Longitude
}
var radius = 6371;            //Earth radius in km
var degToRad = Math.PI / 180; //degrees -> radians, since Math.cos expects radians
//## Now I can calculate the global X and Y for each reference point ##\\
// This function converts lat and lng coordinates to GLOBAL X and Y positions
function latlngToGlobalXY(lat, lng){
    //Calculates x based on cos of average of the latitudes
    let x = radius*lng*degToRad*Math.cos(((p0.lat + p1.lat)/2)*degToRad);
    //Calculates y based on latitude
    let y = radius*lat*degToRad;
    return {x: x, y: y}
}
// Calculate global X and Y for top-left reference point
p0.pos = latlngToGlobalXY(p0.lat, p0.lng);
// Calculate global X and Y for bottom-right reference point
p1.pos = latlngToGlobalXY(p1.lat, p1.lng);
/*
* This gives me the X and Y in relation to the map for the 2 reference points.
* Now we have the global AND screen areas, and we can relate both for the projection point.
*/
// This function converts lat and lng coordinates to SCREEN X and Y positions
function latlngToScreenXY(lat, lng){
    //Calculate global X and Y for projection point
    let pos = latlngToGlobalXY(lat, lng);
    //Calculate the percentage of global X position in relation to total global width
    pos.perX = ((pos.x-p0.pos.x)/(p1.pos.x - p0.pos.x));
    //Calculate the percentage of global Y position in relation to total global height
    pos.perY = ((pos.y-p0.pos.y)/(p1.pos.y - p0.pos.y));
    //Returns the screen position based on reference points
    return {
        x: p0.scrX + (p1.scrX - p0.scrX)*pos.perX,
        y: p0.scrY + (p1.scrY - p0.scrY)*pos.perY
    }
}
//# The usage is like this #\\
var pos = latlngToScreenXY(-22.815319, -47.071718);
$point = $("#point-to-project");
$point.css("left", pos.x+"em");
$point.css("top", pos.y+"em");
As you can see, I made this in javascript, but the calculations can be translated to any language.
P.S. I'm applying the converted positions to an HTML element whose id is "point-to-project". To use this piece of code in your project, you should create this element (styled with position: absolute) or change the "usage" block.
Since this page shows up at the top of Google when searching for this problem, I would like to provide a more practical answer. The answer by MvG is correct, but rather theoretical.
I have made a track-plotting app for the Fitbit Ionic in JavaScript. The code below is how I tackled the problem.
//LOCATION PROVIDER
index.js
var gpsFix = false;
var circumferenceAtLat = 0;
function locationSuccess(pos){
    if(!gpsFix){
        gpsFix = true;
        // meters per degree of longitude at this latitude
        // (0.01745329251 = PI/180; 111305 ≈ meters per degree at the equator)
        circumferenceAtLat = Math.cos(pos.coords.latitude*0.01745329251)*111305;
    }
    pos.x = Math.round(pos.coords.longitude*circumferenceAtLat);
    pos.y = Math.round(pos.coords.latitude*110919); // 110919 ≈ meters per degree of latitude
    plotTrack(pos);
}
plotting.js
function plotTrack(position){
    let x = Math.round((this.segments[i].start.x - this.bounds.minX)*this.scale);
    let y = Math.round((this.bounds.maxY - this.segments[i].start.y)*this.scale); //height needs to be inverted
    //redraw?
    let redraw = false;
    //x or y out of bounds?
    if(position.x > this.bounds.maxX){
        this.bounds.maxX = (position.x-this.bounds.minX)*1.1+this.bounds.minX; //increase by 10%
        redraw = true;
    }
    if(position.x < this.bounds.minX){
        this.bounds.minX = this.bounds.maxX-(this.bounds.maxX-position.x)*1.1;
        redraw = true;
    }
    if(position.y > this.bounds.maxY){
        this.bounds.maxY = (position.y-this.bounds.minY)*1.1+this.bounds.minY; //increase by 10%
        redraw = true;
    }
    if(position.y < this.bounds.minY){
        this.bounds.minY = this.bounds.maxY-(this.bounds.maxY-position.y)*1.1;
        redraw = true;
    }
    if(redraw){
        reDraw();
    }
}
function reDraw(){
    let xScale = device.screen.width / (this.bounds.maxX-this.bounds.minX);
    let yScale = device.screen.height / (this.bounds.maxY-this.bounds.minY);
    if(xScale < yScale) this.scale = xScale;
    else this.scale = yScale;
    //Loop through your objects to redraw all of them
}
For completeness, I would like to add my Python adaptation of @allexrm's code, which worked really well. Thanks again!
import math

radius = 6371  # Earth radius in km

class referencePoint:
    def __init__(self, scrX, scrY, lat, lng):
        self.scrX = scrX
        self.scrY = scrY
        self.lat = lat
        self.lng = lng

# Top-left and bottom-right reference points
p0 = referencePoint(0, 0, 52.526470, 13.403215)
p1 = referencePoint(2244, 2060, 52.525035, 13.405809)

# This function converts lat and lng coordinates to GLOBAL X and Y positions
def latlngToGlobalXY(lat, lng):
    # Calculates x based on cos of average of the latitudes
    # (angles converted from degrees to radians for the trig call)
    x = radius*math.radians(lng)*math.cos(math.radians((p0.lat + p1.lat)/2))
    # Calculates y based on latitude
    y = radius*math.radians(lat)
    return {'x': x, 'y': y}

# Calculate global X and Y for both reference points
p0.pos = latlngToGlobalXY(p0.lat, p0.lng)
p1.pos = latlngToGlobalXY(p1.lat, p1.lng)

# This function converts lat and lng coordinates to SCREEN X and Y positions
def latlngToScreenXY(lat, lng):
    # Calculate global X and Y for projection point
    pos = latlngToGlobalXY(lat, lng)
    # Calculate the percentage of global X position in relation to total global width
    perX = ((pos['x']-p0.pos['x'])/(p1.pos['x'] - p0.pos['x']))
    # Calculate the percentage of global Y position in relation to total global height
    perY = ((pos['y']-p0.pos['y'])/(p1.pos['y'] - p0.pos['y']))
    # Returns the screen position based on reference points
    return {
        'x': p0.scrX + (p1.scrX - p0.scrX)*perX,
        'y': p0.scrY + (p1.scrY - p0.scrY)*perY
    }

pos = latlngToScreenXY(52.525607, 13.404572)
pos['x'] and pos['y'] now contain the translated x and y coordinates of the lat/lng pair (52.525607, 13.404572).
I hope this is helpful for anyone who, like me, is looking for a workable way to translate lat/lng into a local reference coordinate system.
Best
It's better to convert to UTM coordinates and treat those as x and y.
import utm
u = utm.from_latlon(12.917091, 77.573586)
The result will be (779260.623156606, 1429369.8665238516, 43, 'P')
The first two values can be treated as x and y coordinates; 43P is the UTM zone, which can be ignored for small areas (up to about 668 km wide).
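As a quick sanity check along the lines of steps (1)-(3) in the question, one can convert two nearby points to UTM and compare the planar Euclidean distance against a great-circle (haversine) distance; a minimal sketch (the function name and test points are mine):

import math
import utm

def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    # Great-circle distance in meters on a spherical earth
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp/2)**2 + math.cos(p1)*math.cos(p2)*math.sin(dl/2)**2
    return 2 * r * math.asin(math.sqrt(a))

lat1, lon1 = 12.917091, 77.573586
lat2, lon2 = 12.920000, 77.580000

e1, n1, _, _ = utm.from_latlon(lat1, lon1)
e2, n2, _, _ = utm.from_latlon(lat2, lon2)

planar = math.hypot(e2 - e1, n2 - n1)           # Euclidean distance in meters
geodesic = haversine_m(lat1, lon1, lat2, lon2)  # surface distance in meters
print(planar, geodesic)  # for points this close, the two should agree closely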

How to apply a filter on CMRotationMatrix using a CADisplayLink

How can I apply a filter to a CMRotationMatrix? Maybe a Kalman filter. I need to reduce the noise of the CMRotationMatrix (transformFromCMRotationMatrix) to get smooth values in the resulting matrix.
The matrix values will be converted to XYZ; in my case I'm simulating 3D on a 2D screen like this:
// Casting matrix to x, y
vec4f_t v;
multiplyMatrixAndVector(v, projectionCameraTransform, boxMatrix);
float x = (v[0] / v[3] + 1.0f) * 0.5f;
float y = (v[1] / v[3] + 1.0f) * 0.5f;
CGPointMake(x * self.bounds.size.width, self.bounds.size.height - (y * self.bounds.size.height));
code:
// define variable
mat4f_t cameraTransform;
// start the display link loop
- (void)startDisplayLink
{
    displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(onDisplayLink:)];
    [displayLink setFrameInterval:1];
    [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}
// stop the display link loop
- (void)stopDisplayLink
{
    [displayLink invalidate];
    displayLink = nil;
}
// event of display link
- (void)onDisplayLink:(id)sender
{
    CMDeviceMotion *d = motionManager.deviceMotion;
    if (d != nil) {
        CMRotationMatrix r = d.attitude.rotationMatrix;
        transformFromCMRotationMatrix(cameraTransform, &r);
        [self setNeedsDisplay];
    }
}
// function triggered before [self setNeedsDisplay];
// (mout is a 4x4 matrix, so it is declared mat4f_t, matching cameraTransform above)
void transformFromCMRotationMatrix(mat4f_t mout, const CMRotationMatrix *m)
{
    mout[0]  = (float)m->m11;
    mout[1]  = (float)m->m21;
    mout[2]  = (float)m->m31;
    mout[3]  = 0.0f;
    mout[4]  = (float)m->m12;
    mout[5]  = (float)m->m22;
    mout[6]  = (float)m->m32;
    mout[7]  = 0.0f;
    mout[8]  = (float)m->m13;
    mout[9]  = (float)m->m23;
    mout[10] = (float)m->m33;
    mout[11] = 0.0f;
    mout[12] = 0.0f;
    mout[13] = 0.0f;
    mout[14] = 0.0f;
    mout[15] = 1.0f;
}
// Matrix-vector and matrix-matrix multiplication routines
void multiplyMatrixAndVector(vec4f_t vout, const mat4f_t m, const vec4f_t v)
{
    vout[0] = m[0]*v[0] + m[4]*v[1] + m[8]*v[2]  + m[12]*v[3];
    vout[1] = m[1]*v[0] + m[5]*v[1] + m[9]*v[2]  + m[13]*v[3];
    vout[2] = m[2]*v[0] + m[6]*v[1] + m[10]*v[2] + m[14]*v[3];
    vout[3] = m[3]*v[0] + m[7]*v[1] + m[11]*v[2] + m[15]*v[3];
}
In general I would distinguish between improving the signal-to-noise ratio and smoothing the signal.
Signal Improvement
If you really want to do better than Apple's Core Motion, which already has a sensor fusion algorithm implemented, be prepared for a long-term project with an uncertain outcome. In that case you would take the raw accelerometer and gyro signals and build your own sensor fusion algorithm, but you would have to deal with a lot of problems: drift, hardware dependencies across iPhone versions, hardware differences between sensors of the same generation, and so on. So my advice: try everything to avoid it.
Smoothing
This just means interpolating two or more signals and building a kind of average. I don't know of any suitable approach that works on rotation matrices directly (maybe there is one), but you can use quaternions instead (more resources: OpenGL Tutorial Using Quaternions to represent rotation or Quaternion FAQ).
The resulting quaternion of such an interpolation can be multiplied with your vector to get the projection similarly to the matrix way (you may look at Finding normal vector to iOS device for more information).
Interpolation between two unit quaternions representing rotations can be accomplished with slerp. In practice you will use what is described as Geometric Slerp on Wikipedia. If you have two points in time t1 and t2, the corresponding quaternions q1 and q2, and the angular distance omega between them, the formula is:
q'(q1, q2, t) = sin((1 - t) * omega) / sin(omega) * q1 + sin(t * omega) / sin(omega) * q2
t should be 0.5 because you want the average of both rotations. Omega can be calculated from the dot product:
cos(omega) = q1.q2 = w1*w2 + x1*x2 + y1*y2 + z1*z2
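A minimal slerp sketch in Python (the function name is mine), following the formula above; it includes the usual shorter-arc flip and the small-angle fallback mentioned in the notes below:

import math

def slerp(q1, q2, t=0.5):
    # q1, q2 are unit quaternions as (w, x, y, z) tuples; t = 0.5 gives the average
    dot = sum(a * b for a, b in zip(q1, q2))
    if dot < 0.0:                       # flip one input to take the shorter arc
        q2 = tuple(-c for c in q2)
        dot = -dot
    omega = math.acos(min(1.0, dot))    # angular distance, clamped for safety
    if omega < 1e-6:                    # nearly identical: sin(x) ≈ x, use plain lerp
        s1, s2 = 1.0 - t, t
    else:
        s1 = math.sin((1.0 - t) * omega) / math.sin(omega)
        s2 = math.sin(t * omega) / math.sin(omega)
    out = tuple(s1 * a + s2 * b for a, b in zip(q1, q2))
    norm = math.sqrt(sum(c * c for c in out))
    return tuple(c / norm for c in out) # keep the result a unit quaternion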
If this approach using two quaternions still doesn't match your needs, you can repeat it: slerp(slerp(q1, q2), slerp(q3, q4)). Some notes:
From a performance point of view, it's not that cheap to perform three sin and one arccos call in your run loop on every frame. Thus you should avoid using too many points.
In your case all signals are close to each other, especially when using high sensor frequencies. You have to take care with very small angles, which make 1/sin(omega) explode; in that case use sin(x) ≈ x.
As with other filters such as a low-pass filter, the more points in time you use, the more time delay you get. So at frequency f you will get about 0.5/f s of delay when using two points, and 1.5/f for the double slerp.
If something looks weird, check that your resulting quaternions are still unit quaternions, i.e. ||q|| = 1.
If you are running into performance issues you might have a look at Hacking Quaternions
The C++ project pbrt on GitHub contains a quaternion class to take some inspiration from.