How to pinch zoom into a certain location in OpenGL ES 2? - opengl-es-2.0

There are other posts on Stack Overflow about pinch zooming, but I haven't found any for OpenGL that do what I'm looking for. I am currently using the orthoM function to change the camera position and to do scaling in OpenGL. I have gotten the camera to move around, and have gotten pinch zooming to work, but the zooming always zooms into the center of the OpenGL surface view coordinate system at (0,0). After trying different things, I haven't yet found a way that allows the camera to move around while also allowing pinch zooming towards the user's touch point (as an example, the touch controls in Clash of Clans are similar to what I am trying to make).
(The method I'm currently using to get the scale value is based on this post.)
My first attempt:
// mX and mY are the movement offsets based on the user's touch movements,
// and can be positive or negative
Matrix.orthoM(mProjectionMatrix, 0, ((-WIDTH/2f)+mX)*scale, ((WIDTH/2f)+mX)*scale,
        ((-HEIGHT/2f)+mY)*scale, ((HEIGHT/2f)+mY)*scale, 1f, 2f);
In the above code, I realize that the camera moves towards the coordinate (0,0) because, as scale gets smaller, the values of all four camera edges shrink towards 0. So although the zoom always heads toward the coordinate-system center, the camera at least pans at the right speed at any scale level.
So, I then edited the code to this:
Matrix.orthoM(mProjectionMatrix, 0, (-WIDTH/2f)*scale+mX, (WIDTH/2f)*scale+mX,
        (-HEIGHT/2f)*scale+mY, (HEIGHT/2f)*scale+mY, 1f, 2f);
The edited code now makes the zoom go toward the center of the screen no matter where the camera is in the surface view coordinate system (although that isn't the full goal), but the camera movement is off, as the offset isn't adjusted for the different scale levels.
I'm still working to find a solution myself, but if anyone has any advice or ideas on how this could be implemented, I would be glad to hear.
Note, I don't think it matters, but I'm doing this in Android and using Java.
EDIT:
Since I first posted this question, I have made some changes to my code. I found this post, which explains the logic of how to pan the camera to the correct position based on the scale, so that the zoom point remains in the same position.
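To write that logic down as an equation (my notation: c is the camera center, s is the scale, and p is the zoom point, all in world coordinates): a world point keeps its screen position when (p - c)/s stays constant across the scale change, so the new camera center c' has to satisfy

(p - c)/s = (p - c')/s'  =>  c' = p - (s'/s) * (p - c)

which is the pan the code below tries to apply.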
My updated attempt:
// Only run the following if-block while two fingers are on the screen
if (zooming) {
    // midPoint is a PointF object that stores the coordinate of the
    // midpoint between the two fingers
    float scaleChange = scale - prevScale; // scale is the same as in my previous code
    float offsetX = -(midPoint.x * scaleChange);
    float offsetY = -(midPoint.y * scaleChange);
    cameraPos.x += offsetX;
    cameraPos.y += offsetY;
}
// cameraPos is a PointF object that stores the coordinate at the center of the screen,
// and replaces the previous values mX and mY
left = cameraPos.x-(WIDTH/2f)*scale;
right = cameraPos.x+(WIDTH/2f)*scale;
bottom = cameraPos.y-(HEIGHT/2f)*scale;
top = cameraPos.y+(HEIGHT/2f)*scale;
Matrix.orthoM(mProjectionMatrix, 0, left, right, bottom, top, 1f, 2f);
The code works quite a bit better now, but it still isn't completely accurate. I tested it with panning disabled, and the zooming behaved somewhat better. However, with panning enabled, the zooming doesn't focus in on the zoom point at all.

I finally found a solution while working on another project, so I'll post (in the simplest form possible) what worked for me, in case it helps anyone.
final float currentPointersDistance = this.calculateDistance(pointer1CurrentX, pointer1CurrentY, pointer2CurrentX, pointer2CurrentY);
final float zoomFactorMultiplier = currentPointersDistance/initialPointerDistance; //> Get an initial distance between two pointers before calling this
final float newZoomFactor = previousZoomFactor*zoomFactorMultiplier;
final float zoomFactorChange = newZoomFactor-previousZoomFactor; //> previousZoomFactor is the current value of the zoom
//> The x and y values of the variables are in scene coordinate form (not surface)
final float distanceFromCenterToMidpointX = camera.getCenterX()-currentPointersMidpointX;
final float distanceFromCenterToMidpointY = camera.getCenterY()-currentPointersMidpointY;
final float offsetX = -(distanceFromCenterToMidpointX*zoomFactorChange/newZoomFactor);
final float offsetY = -(distanceFromCenterToMidpointY*zoomFactorChange/newZoomFactor);
camera.setZoomFactor(newZoomFactor);
camera.translate(offsetX, offsetY);
initialPointerDistance = currentPointersDistance; //> Make sure to do this
Method used to calculate the distance between two pointers:
public float calculateDistance(float pX1, float pY1, float pX2, float pY2) {
    float x = pX2 - pX1;
    float y = pY2 - pY1;
    return (float) Math.sqrt((x * x) + (y * y));
}
Camera class methods used above:
public float getXMin() {
    return centerX - ((centerX - xMin) / zoomFactor);
}

public float getYMin() {
    return centerY - ((centerY - yMin) / zoomFactor);
}

public float getXMax() {
    return centerX + ((xMax - centerX) / zoomFactor);
}

public float getYMax() {
    return centerY + ((yMax - centerY) / zoomFactor);
}

public void setZoomFactor(float pZoomFactor) {
    zoomFactor = pZoomFactor;
}

public void translate(float pX, float pY) {
    xMin += pX;
    yMin += pY;
    xMax += pX;
    yMax += pY;
}
The orthoM() function is called like the following:
Matrix.orthoM(projectionMatrix, 0, camera.getXMin(), camera.getXMax(), camera.getYMin(), camera.getYMax(), near, far);
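For context, here is a minimal sketch of how the pieces above might be driven from an Android touch handler. The getZoomFactor() getter and the surfaceToSceneX()/surfaceToSceneY() conversions are assumed helpers, not part of the code above:

private float initialPointerDistance;
private float previousZoomFactor = 1f;

@Override
public boolean onTouchEvent(MotionEvent event) {
    switch (event.getActionMasked()) {
        case MotionEvent.ACTION_POINTER_DOWN:
            if (event.getPointerCount() == 2) {
                // Capture the starting distance and zoom when the second finger lands
                initialPointerDistance = calculateDistance(
                        event.getX(0), event.getY(0), event.getX(1), event.getY(1));
                previousZoomFactor = camera.getZoomFactor(); // assumed getter
            }
            break;
        case MotionEvent.ACTION_MOVE:
            if (event.getPointerCount() == 2) {
                // Midpoint of the two pointers, converted from surface to scene coordinates
                final float midX = surfaceToSceneX((event.getX(0) + event.getX(1)) / 2f);
                final float midY = surfaceToSceneY((event.getY(0) + event.getY(1)) / 2f);
                // ...run the zoom code above with currentPointersMidpointX/Y = midX/midY
            }
            break;
    }
    return true;
}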

Related

How does the viewport work in libgdx and how to set it up correctly?

I am learning to use libgdx and I got confused by the viewport and how objects are arranged on the screen. Let's assume my 2D world is 2x2 units wide and high. Now I create a camera whose viewport is 1x1, so I should see 25% of my world. Usually displays are not square, so I would expect libgdx to squish and stretch this square to fit the display.
For a side scroller you would set the viewport height the same as the world height and adjust the viewport width according to the aspect ratio, as in the sketch below. Independent of the aspect ratio of your display you always see the full height of the world but different expanses on the x-axis. Somebody with a wider-than-high display could see further along the x-axis than somebody with a square display, but proportions are maintained and there is no distortion. So far, I thought I had mastered how the viewport logic works.
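For example, my understanding of that setup is something like this sketch (WORLD_HEIGHT is a made-up constant here; the camera calls are the standard libgdx API):

// Show the full world height; derive the visible width from the display's aspect ratio
float aspect = (float) Gdx.graphics.getWidth() / (float) Gdx.graphics.getHeight();
OrthographicCamera camera = new OrthographicCamera(WORLD_HEIGHT * aspect, WORLD_HEIGHT);
camera.update();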
I am working through the book "Learning LibGDX Game Development", in which you develop the game "Canyon Bunny". The source code can be found here:
Canyon Bunny - GitHub
In the WorldRenderer class you find the initialization of the camera:
private void init() {
    batch = new SpriteBatch();
    camera = new OrthographicCamera(Constants.VIEWPORT_WIDTH, Constants.VIEWPORT_HEIGHT);
    camera.position.set(0, 0, 0);
    camera.update();
}
The viewport constants are saved in a separate Constants class:
public class Constants {
    // Visible game world is 5 meters wide
    public static final float VIEWPORT_WIDTH = 5.0f;
    // Visible game world is 5 meters tall
    public static final float VIEWPORT_HEIGHT = 5.0f;
}
As you can see, the viewport is 5x5. But the game objects have the right proportions on my phone (16:9), and even on a desktop, when you change the window size, the game maintains the correct proportions. I don't understand why. I would expect the game to paint a square-shaped cutout of the world onto a rectangle-shaped display, which would lead to distortion. Why is that not the case? And why don't you need to adapt the width or height of the viewport to the aspect ratio?
The line:
cameraGUI.setToOrtho(true);
Overrides the values you gave when you called:
cameraGUI = new OrthographicCamera(Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Here's the LibGDX code that shows why/how the viewport sizes you set were ignored:
/** Sets this camera to an orthographic projection using a viewport fitting the screen resolution, centered at
 * (Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2), with the y-axis pointing up or down.
 * @param yDown whether y should be pointing down */
public void setToOrtho (boolean yDown) {
    setToOrtho(yDown, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}

/** Sets this camera to an orthographic projection, centered at (viewportWidth/2, viewportHeight/2), with the y-axis pointing up
 * or down.
 * @param yDown whether y should be pointing down.
 * @param viewportWidth
 * @param viewportHeight */
public void setToOrtho (boolean yDown, float viewportWidth, float viewportHeight) {
    if (yDown) {
        up.set(0, -1, 0);
        direction.set(0, 0, 1);
    } else {
        up.set(0, 1, 0);
        direction.set(0, 0, -1);
    }
    position.set(zoom * viewportWidth / 2.0f, zoom * viewportHeight / 2.0f, 0);
    this.viewportWidth = viewportWidth;
    this.viewportHeight = viewportHeight;
    update();
}
So you would need to do this instead:
cameraGUI.setToOrtho(true, Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Also, don't forget to call update() whenever you change the position or viewport dimensions of your camera (or other properties).
I found the reason. If you take a look at the WorldRenderer class, there is a resize() method in which the viewport is adapted to the aspect ratio. I am just wondering, because until now I thought the resize method was only called when resizing the window. Apparently it's also called at startup. Can anybody clarify?
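For reference, the adaptation in that resize() looks roughly like this sketch (the world height stays fixed and the width is derived from the new window size):

@Override
public void resize(int width, int height) {
    // Keep the visible world height constant and scale the width to the aspect ratio
    camera.viewportWidth = (Constants.VIEWPORT_HEIGHT / (float) height) * (float) width;
    camera.update();
}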

THREEJS: Rotating the camera while lookingAt

I have a moving camera in a camera container which flies around the scene on given paths, like an airplane, so it can move to any position (x, y, z), positive or negative. The camera container looks at its own future path using a spline curve.
Now I want to rotate the camera using the mouse direction while still keeping the general lookAt position as the object moves forward. You could say I want to turn my head on my body: while the body moves along with the general lookAt direction, I turn my head around, up and down, to let's say 220 degrees, so I can't look behind my body.
In my code the cameraContainer is responsible for moving along a spline curve and for looking at the moving direction. The camera is added as a child of the cameraContainer and is responsible for the rotation driven by the mouse.
What I can't get working properly is the rotation of the camera. I guess it's a very common problem. Let's say the camera, when moving only on the x-axis, doesn't move straight; it moves along a curve. Especially at different camera positions, the rotation seems very different. I was trying to use the cameraContainer to avoid this problem, but the problem seems unrelated to the world coordinates.
Here is what i have:
// camera is in a container
cameraContainer = new THREE.Object3D();
cameraContainer.add(camera);
camera.lookAt(new THREE.Vector3(0, 0, 1));
cameraContainer.lookAt(nextPositionOnSplineCurve);
// Here goes the rotation depending on mouse
// Vertical
var mouseVerti = 1; // 1 = top, -1 = bottom
if(window.App4D.mouse.y <= instance.domCenterPos.y) // mouse is bottom?
mouseVerti = -1;
// how far is the mouse away from center: 1 most, 0 near
var yMousePerc = Math.abs(Math.ceil((instance.domCenterPos.y - window.App4D.mouse.y) / (instance.domCenterPos.y - instance.domBoundingBox.bottom) * 100) / 100);
var yAngleDiffSide = (instance.config.scene.camera.maxAngle - instance.config.scene.camera.angle) / 2;
var yRotateRan = mouseVerti * yAngleDiffSide * yMousePerc * Math.PI / 180;
instance.camera.rotation.x += yRotateRan; // rotation x = vertical
// Horizontal
var mouseHori = 1; // 1 = right, -1 = left
if(window.App4D.mouse.x <= instance.domCenterPos.x) // mouse is left?
mouseHori = -1;
// how far is the mouse away from center: 1 most, 0 near
var xMousePerc = Math.abs(Math.ceil((instance.domCenterPos.x - window.App4D.mouse.x) / (instance.domCenterPos.x - instance.domBoundingBox.right) * 100) / 100);
var xAngleDiffSide = (instance.config.scene.camera.maxAngle - instance.config.scene.camera.angle) / 2;
var xRotateRan = mouseHori * xAngleDiffSide * xMousePerc * Math.PI / 180;
instance.camera.rotation.y += xRotateRan; // rotation y = horizontal
Would be really thankful if someone can give me a hint!
I got the answer after some more trial and error. The solution is simply to take the initial y rotation into consideration.
When setting up the camera container, with the real camera as a child of the container, I had to point the camera at the front face of the camera container object in order for the camera to look in the right direction. That led to an initial rotation of (0, 3.14, 0) (x, y, z). The solution was to add 3.14 to the y rotation every time I assigned the mouse rotation (as mentioned by WestLangley).
cameraReal.lookAt(new THREE.Vector3(0,0,1));
cameraReal.rotation.y = xRotateRan + 3.14;

THREE.js rotating camera around an object using orbit path

I am struggling to solve this problem.
On my scene, I have a camera which looks at the center of mass of an object. I have some buttons that set the camera position to a particular view (front view, back view, ...) along an invisible sphere that surrounds the object (constant radius).
When I click a button, I would like the camera to move from its start position to the end position along the sphere's surface. While the camera moves, I would like it to keep fixating on the center of mass of the object.
Does anyone have a clue on how to achieve this?
Thanks for your help!
If you are happy to use (or prefer) basic trigonometry, then in your initialisation section you could do this:
var cameraAngle = 0;
var orbitRange = 100;
var orbitSpeed = 2 * Math.PI/180;
var desiredAngle = 90 * Math.PI/180;
...
camera.position.set(orbitRange,0,0);
camera.lookAt(myObject.position);
Then in your render/animate section you could do this:
// Stop once the camera reaches (or passes) the target; a >= test avoids
// an exact floating-point equality that may never be hit
if (cameraAngle >= desiredAngle) { orbitSpeed = 0; }
else {
    cameraAngle += orbitSpeed;
    camera.position.x = Math.cos(cameraAngle) * orbitRange;
    camera.position.y = Math.sin(cameraAngle) * orbitRange;
}
Of course, your buttons would modify what the desiredAngle was (0°, 90°, 180° or 270°, presumably), you need to rotate around the correct plane (I am rotating around the XY plane above), and you can play with the orbitRange and orbitSpeed until you are happy.
You can also modify orbitSpeed as it moves along the orbit path, speeding up and slowing down at various cameraAngles for a smoother ride. This process is called 'tweening' and you could search on 'tween' or 'tweening' if you want to know more. I think Three.js has tweening support but have never looked into it.
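As a sketch of the idea: if t in [0,1] is the linear progress along the move, a smoothstep ease-in/ease-out remaps the angle as

theta(t) = theta_start + (theta_end - theta_start) * (3t^2 - 2t^3)

so the angular speed starts at zero, peaks mid-move, and eases back to zero at the destination.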
Oh, also remember to set your camera's far property to be greater than orbitRange, or you will only see the front half of your object and, depending on what it is, that might look weird.

Keeping an object made in OpenGL within the boundaries of a window

I have been working on a game using Objective-C and OpenGL. I know how to create the object and how to make it move the way I want, but I cannot keep it within the window. How do you keep the object within the window?
OpenGL FAQ, section 8.070, "How can I automatically calculate a view that displays my entire model?":
The following is from a posting by Dave Shreiner on setting up a basic
viewing system:
First, compute a bounding sphere for all objects in your scene. This
should provide you with two bits of information: the center of the
sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it
"diam").
Next, choose a value for the zNear clipping plane. General guidelines
are to choose something larger than, but close to 1.0. So, let's say
you set:
zNear = 1.0;
zFar = zNear + diam;
Structure your matrix calls in this order (for an Orthographic projection):
GLdouble left = c.x - diam;
GLdouble right = c.x + diam;
GLdouble bottom = c.y - diam;
GLdouble top = c.y + diam;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(left, right, bottom, top, zNear, zFar);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
This approach should center your objects in the middle of the window and stretch them to fit (i.e., it's assuming that you're using a
window with aspect ratio = 1.0). If your window isn't square, compute
left, right, bottom, and top, as above, and put in the following logic
before the call to glOrtho():
GLdouble aspect = (GLdouble) windowWidth / windowHeight;
if ( aspect < 1.0 ) { // window taller than wide
bottom /= aspect;
top /= aspect;
} else {
left *= aspect;
right *= aspect;
}
The above code should position the objects in your scene appropriately. If you intend to manipulate the view (i.e. rotate, etc.), you
need to add a viewing transform to it.
A typical viewing transform will go on the ModelView matrix and might
look like this:
gluLookAt(0., 0., 2.*diam,
          c.x, c.y, c.z,
          0.0, 1.0, 0.0);
A "look at" camera is a very convenient technique for making a viewer track an object (however the object or camera point moves, the object will stay onscreen). See gluLookAt for a commonly provided implementation; most 3D helper code libraries will have one. You give it a desired camera point, an object of interest, and an 'up vector', and it will create an appropriate world (camera transformation) matrix.
Otherwise, if you're in control of the object, just don't move it outside of the initial frustum, as in the sketch below.
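As a minimal sketch of that last point (Java here for illustration; the object position and radius are hypothetical):

/** Clamp a coordinate so the object stays at least `radius` inside [min, max]. */
static float clampToBounds(float value, float min, float max, float radius) {
    return Math.max(min + radius, Math.min(max - radius, value));
}

// Usage with the glOrtho bounds computed above:
// obj.x = clampToBounds(obj.x, left, right, objectRadius);
// obj.y = clampToBounds(obj.y, bottom, top, objectRadius);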

Glut glLoadMatrixf camera equivalent

In my GLUT application I'm simulating a plane with the camera. When the plane's speed is low, I intend to have the nose start to point towards the ground as the camera falls. My first instinct was to just change the pitch until it pointed downwards at -90 degrees. However, I can't just change the pitch, because if the plane is tilted on its side or upside down, then it would not be changing direction towards the ground.
Now I'm trying to do a rough simulation of this by shifting the lookAt.y value downwards. To do this, I am trying to get all the current camera coordinates that I use to set the camera
(eye.x, eye.y, eye.z, look.x, look.y, look.z, up.x, up.y, up.z), then call set again with the new, modified values.
I've been working with Camera.cpp and Camera.h to control my camera functions. They can be found here.
After adding methods to get all the values, I found that only the eye values are actually updated when various camera motions are made. I guess my question is: how do I retrieve these values?
The glLoadMatrixf call is in this function:
void Camera::setModelViewMatrix(void)
{   // load model view matrix with existing camera values
    float m[16];
    Vector3 eVec(eye.x, eye.y, eye.z);
    m[0] = u.x; m[4] = u.y; m[8]  = u.z; m[12] = -eVec.dot(u);
    m[1] = v.x; m[5] = v.y; m[9]  = v.z; m[13] = -eVec.dot(v);
    m[2] = n.x; m[6] = n.y; m[10] = n.z; m[14] = -eVec.dot(n);
    m[3] = 0;   m[7] = 0;   m[11] = 0;   m[15] = 1.0;
    look.x = u.y; look.y = v.y; look.z = n.y;
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);
}
Is there a way to get 'eye', 'lookAt', and 'up' values from the matrix here? Or should I do something else to get these values?
Thanks in advance for your help!
The camera class you link to is not an actual OpenGL class, but it should be simple enough to work with.
The function quoted just takes the current values of the camera object and sends them to OpenGL. If you look at the camera's set function, you can see how the program calculates the values it actually stores.
The eye value is stored directly. The lookAt value is just the value of (eye - n), by vector math. The up value is the hardest, but if I remember my vector math correctly, I believe that up = (n cross u).
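Sketched out (transliterated to Java for illustration; the original class is C++, and the Vector3 subtract/cross helpers are assumed), the getters would look like:

// u = camera right, v = camera up, n = camera backward
public Vector3 getEye() {
    return eye;                 // stored directly by the camera
}
public Vector3 getLookAt() {
    return eye.subtract(n);     // a point one unit along the view direction
}
public Vector3 getUp() {
    return n.cross(u);          // equals v for an orthonormal right-handed basis
}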