LibGDX camera position shifted on movement

I'm programming a game with LibGDX and Box2D, and I want my camera to follow my player. But when I zoom in (to compensate for Box2D's metric scale, using camera.zoom = x), the view is shifted whenever the player moves (the camera follows the player):
[Screenshot: shifted view]
[Screenshot: normal view]
This only happens while the player is moving, so it can't be a problem with the coordinates themselves. My question is how to remove this shifting while the player moves.
Here's some of my code:
Render Loop (excerpt):
@Override
public void render(float delta) {
    // set the camera position to the player's position
    cam.position.set(player_body.getWorldCenter().x, player_body.getWorldCenter().y, 0);
    cam.update();
    world.step(Gdx.graphics.getDeltaTime(), 6, 2);
    world.clearForces();
    handleInput();
    Gdx.gl.glClearColor(.05f, .05f, .05f, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.setProjectionMatrix(cam.combined);
    debugRenderer.render(world, cam.combined);
}
Make the player move:
if (Gdx.input.isKeyPressed(Keys.W)) {
    player_body.setLinearVelocity(transX, transY);
    player.setMoving(true);
}
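One plausible cause (an editor's guess, not confirmed in the thread): the camera position is copied from the body before world.step() advances it, so the rendered view trails the player by one physics step whenever it moves. A minimal sketch of a reordered loop, also using a fixed timestep, which Box2D generally prefers:
@Override
public void render(float delta) {
    handleInput();
    // Step the physics first so the body reaches its new position...
    world.step(1 / 60f, 6, 2); // fixed timestep; variable deltas can make Box2D jittery
    world.clearForces();
    // ...then center the camera on it, so the view never lags the player
    cam.position.set(player_body.getWorldCenter().x, player_body.getWorldCenter().y, 0);
    cam.update();
    Gdx.gl.glClearColor(.05f, .05f, .05f, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.setProjectionMatrix(cam.combined);
    debugRenderer.render(world, cam.combined);
}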

Related

How does the viewport work in libgdx and how to set it up correctly?

I am learning to use libgdx and I got confused by the viewport and how objects are arranged on the screen. Let's assume my 2D world is 2x2 units wide and high. Now I create a camera whose viewport is 1x1, so I should see 25% of my world. Usually displays are not square shaped, so I would expect libgdx to squish and stretch this square to fit the display.
For a side scroller you would set the viewport height to the world height and adjust the viewport width according to the aspect ratio. Independent of your display's aspect ratio, you always see the full height of the world but a different expanse along the x-axis. Somebody with a wider-than-high display can see further along the x-axis than somebody with a square display, but proportions are maintained and there is no distortion. Up to this point I thought I had mastered how the viewport logic works.
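As a sketch of that pattern (WORLD_HEIGHT is a made-up constant, not from the book):
// Fix the visible world height and derive the viewport width from the
// display's aspect ratio, so proportions are preserved on any screen.
float aspectRatio = (float) Gdx.graphics.getWidth() / Gdx.graphics.getHeight();
OrthographicCamera camera = new OrthographicCamera(WORLD_HEIGHT * aspectRatio, WORLD_HEIGHT);
camera.update();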
I am working with the book "Learning LibGDX Game Development", in which you develop the game "Canyon Bunny". The source code can be found here:
Canyon Bunny - GitHub
In the WorldRenderer class you find the initialization of the camera:
private void init() {
    batch = new SpriteBatch();
    camera = new OrthographicCamera(Constants.VIEWPORT_WIDTH, Constants.VIEWPORT_HEIGHT);
    camera.position.set(0, 0, 0);
    camera.update();
}
The viewport constants are saved in a separate Constants class:
public class Constants {
    // Visible game world is 5 meters wide
    public static final float VIEWPORT_WIDTH = 5.0f;
    // Visible game world is 5 meters tall
    public static final float VIEWPORT_HEIGHT = 5.0f;
}
As you can see, the viewport is 5x5. But the game objects have the right proportions on my phone (16:9), and even on a desktop, when you change the window size, the game maintains the correct proportions. I don't understand why. I would expect the game to paint a square-shaped cutout of the world onto a rectangular display, which would lead to distortion. Why is that not the case? And why don't you need to adapt the width or height of the viewport to the aspect ratio?
The line:
cameraGUI.setToOrtho(true);
overrides the values you gave when you called:
cameraGUI = new OrthographicCamera(Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Here's the LibGDX code that shows why/how the viewport sizes you set were ignored:
/** Sets this camera to an orthographic projection using a viewport fitting the screen resolution, centered at
 * (Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2), with the y-axis pointing up or down.
 * @param yDown whether y should be pointing down */
public void setToOrtho (boolean yDown) {
    setToOrtho(yDown, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}

/** Sets this camera to an orthographic projection, centered at (viewportWidth/2, viewportHeight/2), with the y-axis pointing up
 * or down.
 * @param yDown whether y should be pointing down.
 * @param viewportWidth
 * @param viewportHeight */
public void setToOrtho (boolean yDown, float viewportWidth, float viewportHeight) {
    if (yDown) {
        up.set(0, -1, 0);
        direction.set(0, 0, 1);
    } else {
        up.set(0, 1, 0);
        direction.set(0, 0, -1);
    }
    position.set(zoom * viewportWidth / 2.0f, zoom * viewportHeight / 2.0f, 0);
    this.viewportWidth = viewportWidth;
    this.viewportHeight = viewportHeight;
    update();
}
So you would need to do this instead:
cameraGUI.setToOrtho(true, Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Also, don't forget to call update() right after you change the position, viewport dimensions, or other properties of your camera.
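For example (values arbitrary):
camera.position.set(10, 5, 0);
camera.zoom = 0.5f;
camera.update(); // without this, camera.combined still holds the old view-projection matrix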
I found the reason. If you take a look at the WorldRenderer class, there is a resize() method in which the viewport is adapted to the aspect ratio. I am just wondering because, until now, I thought the resize method was only called when resizing the window. Apparently it's also called at startup. Can anybody clarify?
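For reference, on the common backends LibGDX invokes resize() once right after create(), which is why the correction is already applied at startup. The aspect-ratio adaptation in that resize() method looks roughly like this (quoted from memory, so treat it as a sketch rather than the exact book code):
@Override
public void resize(int width, int height) {
    // Keep the viewport height fixed and stretch the width to match the
    // window, so a wide screen simply sees more of the x-axis.
    camera.viewportWidth = (Constants.VIEWPORT_HEIGHT / height) * width;
    camera.update();
}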

How to pinch zoom into a certain location in OpenGL ES 2?

There are other posts on Stack Overflow about pinch zooming, but I haven't found any helpful ones for OpenGL that do what I'm looking for. I am currently using the orthoM function to change the camera position and to do scaling in OpenGL. I have gotten the camera to move around, and have gotten pinch zooming to work, but the zooming always zooms into the center of the OpenGL surface view coordinate system at (0,0). After trying different things, I haven't found a way that allows the camera to move around while also allowing pinch zooming toward the user's touch point (as an example, the touch controls in Clash of Clans are similar to what I am trying to make).
(The method I'm currently using to get the scale value is based on this post.)
My first attempt:
// mX and mY are the movement offsets based on the user's touch movements,
// and can be positive or negative
Matrix.orthoM(mProjectionMatrix, 0, ((-WIDTH/2f)+mX)*scale, ((WIDTH/2f)+mX)*scale,
        ((-HEIGHT/2f)+mY)*scale, ((HEIGHT/2f)+mY)*scale, 1f, 2f);
In the above code, I realize that the camera moves towards the coordinate (0,0) because, as scale gets smaller, the values for the camera edges decrease towards 0. So although the zoom pulls towards the center of the coordinate system, the camera panning moves at the right speed at any scale level.
So, I then edited the code to this:
Matrix.orthoM(mProjectionMatrix, 0, (-WIDTH/2f)*scale+mX, (WIDTH/2f)*scale+mX,
        (-HEIGHT/2f)*scale+mY, (HEIGHT/2f)*scale+mY, 1f, 2f);
The edited code now makes the zoom go toward the center of the screen no matter where the camera is in the surface-view coordinate system (although that isn't the full goal), but the camera movement is off, since the offset isn't adjusted for the different scale levels.
I'm still working on a solution myself, but if anyone has any advice or ideas on how this could be implemented, I would be glad to hear them.
Note: I don't think it matters, but I'm doing this on Android, in Java.
EDIT:
Since I first posted this question, I have made some changes to my code. I found this post, which explains the logic of how to pan the camera to the correct position based on the scale, so that the zoom point remains in the same place.
My updated attempt:
// Only do the following if-block if two fingers are on the screen
if (zooming) {
    // midPoint is a PointF object that stores the coordinate of the
    // midpoint between the two fingers
    float scaleChange = scale - prevScale; // scale is the same as in my previous code
    float offsetX = -(midPoint.x*scaleChange);
    float offsetY = -(midPoint.y*scaleChange);
    cameraPos.x += offsetX;
    cameraPos.y += offsetY;
}
// cameraPos is a PointF object that stores the coordinate at the center of
// the screen, and replaces the previous values mX and mY
left = cameraPos.x-(WIDTH/2f)*scale;
right = cameraPos.x+(WIDTH/2f)*scale;
bottom = cameraPos.y-(HEIGHT/2f)*scale;
top = cameraPos.y+(HEIGHT/2f)*scale;
Matrix.orthoM(mProjectionMatrix, 0, left, right, bottom, top, 1f, 2f);
The code works quite a bit better now, but it still isn't completely accurate. I tested how the code behaves with panning disabled, and the zooming worked somewhat better. However, when panning is enabled, the zooming doesn't focus in on the zoom point at all.
I finally found a solution while working on another project, so I'll post (in the simplest form possible) what worked for me, in case it helps anyone.
final float currentPointersDistance = this.calculateDistance(pointer1CurrentX, pointer1CurrentY, pointer2CurrentX, pointer2CurrentY);
final float zoomFactorMultiplier = currentPointersDistance/initialPointerDistance; //> Get an initial distance between two pointers before calling this
final float newZoomFactor = previousZoomFactor*zoomFactorMultiplier;
final float zoomFactorChange = newZoomFactor-previousZoomFactor; //> previousZoomFactor is the current value of the zoom
//> The x and y values of the variables are in scene coordinate form (not surface)
final float distanceFromCenterToMidpointX = camera.getCenterX()-currentPointersMidpointX;
final float distanceFromCenterToMidpointY = camera.getCenterY()-currentPointersMidpointY;
final float offsetX = -(distanceFromCenterToMidpointX*zoomFactorChange/newZoomFactor);
final float offsetY = -(distanceFromCenterToMidpointY*zoomFactorChange/newZoomFactor);
camera.setZoomFactor(newZoomFactor);
camera.translate(offsetX, offsetY);
initialPointerDistance = currentPointersDistance; //> Make sure to do this
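Why this works (my own sketch of the algebra, not part of the original post): for the scene point p under the fingers to stay put while the zoom changes from z to z', the camera center c must move to c' = p + (c - p) * z/z'. The required shift is c' - c = -(c - p) * (z' - z)/z', which is exactly -(distanceFromCenterToMidpoint * zoomFactorChange / newZoomFactor) as computed above.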
Method used to calculate the distance between two pointers:
public float calculateDistance(float pX1, float pY1, float pX2, float pY2) {
    float x = pX2-pX1;
    float y = pY2-pY1;
    return (float)Math.sqrt((x*x)+(y*y));
}
Camera class methods used above:
public float getXMin() {
    return centerX-((centerX-xMin)/zoomFactor);
}
public float getYMin() {
    return centerY-((centerY-yMin)/zoomFactor);
}
public float getXMax() {
    return centerX+((xMax-centerX)/zoomFactor);
}
public float getYMax() {
    return centerY+((yMax-centerY)/zoomFactor);
}
public void setZoomFactor(float pZoomFactor) {
    zoomFactor = pZoomFactor;
}
public void translate(float pX, float pY) {
    xMin += pX;
    yMin += pY;
    xMax += pX;
    yMax += pY;
}
The orthoM() function is then called as follows:
Matrix.orthoM(projectionMatrix, 0, camera.getXMin(), camera.getXMax(), camera.getYMin(), camera.getYMax(), near, far);

Camera position in Unity

I am making a 2D game in which an object runs with some velocity and jumps onto oncoming platforms, and I have made the camera a child of the game object (i.e. the main player) in the hierarchy. The problem is that whenever my game object gets rotated after striking a platform or obstacle, the main camera starts rotating too. I am unable to sort out this problem; can anyone help with this?
You can either lock the rotation of the camera in the Inspector, or create a camera script that follows your player manually. The second option is better, because you can easily add smoothing, dead zones, or camera effects such as shake on damage taken.
Here is an example script from the Unity tutorial page:
using UnityEngine;
using System.Collections;

public class CompleteCameraController : MonoBehaviour
{
    public GameObject player;  // Public variable to store a reference to the player game object
    private Vector3 offset;    // Private variable to store the offset distance between the player and camera

    // Use this for initialization
    void Start ()
    {
        // Calculate and store the offset value by getting the distance between the player's position and camera's position.
        offset = transform.position - player.transform.position;
    }

    // LateUpdate is called after Update each frame
    void LateUpdate ()
    {
        // Set the position of the camera's transform to be the same as the player's, but offset by the calculated offset distance.
        transform.position = player.transform.position + offset;
    }
}
Add this script to your MainCamera, then drag your player object into the "player" field in the camera's Inspector.
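If you want the smoothing mentioned above, you could, for instance, replace the assignment in LateUpdate with an interpolation such as Vector3.Lerp or Vector3.SmoothDamp towards the target position.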
If you need further help, watch this video here.

Bound camera to sprite bounds

I have a scene camera following a unit (a sprite's position); this unit is standing on a terrain sprite. The camera not only follows the unit, it also zooms in and out with the pinch gesture. Everything works well, except that I want to limit the camera's lower bound by the lower bound of the terrain sprite. In other words, I don't want the camera to display anything below the terrain sprite; instead, the camera should shift upwards and preserve the scale.
Any Swift, Obj-C, or even pseudo-code answers would help. Thanks!
This is my attempt:
let halfTerrain = terrain.size.height/2
let halfViewport = scene!.size.height/2
let distanceFromCenter = halfTerrain - cameraPosition.y
let scaledDistanceFromCenter = distanceFromCenter/cameraScale
let gapSize = halfViewport - halfTerrain/cameraScale
let isOverlapping = scaledDistanceFromCenter < halfViewport
if isOverlapping {
    camera?.runAction(SKAction.moveTo(CGPoint(x: cameraPosition.x,
                                              y: -gapSize * cameraScale),
                                      duration: movingSpeed))
} else {
    camera?.runAction(SKAction.moveTo(cameraPosition, duration: movingSpeed))
}
camera?.runAction(SKAction.scaleTo(cameraScale, duration: scaleSpeed))

Camera as child of Object3D positioning issues

I set up a simple scene here:
http://jsfiddle.net/majman/Sps3c/
I was initially trying to demonstrate a problem I was having with rotating a parent container while having the camera maintain its relative offset position, but when setting up this example I couldn't even adjust the camera's initial position.
Current Problem:
// camera
camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 5, 150);
camera.position.z = 50; // this doesn't work?
// object to contain camera & helper
cameraContainer = new THREE.Object3D();
cameraContainer.rotation.order = "YXZ"; // maybe not necessary
// add to container
cameraContainer.add(camera);
scene.add(cameraContainer);
Now when rotating the cameraContainer, the camera's rotation follows - but I'd like the camera's position to be offset from the cameraContainer. I'm unable to modify any position properties for some reason.
Your code is working fine. You are confused because the CameraHelper is not displaying the camera in its actual position. You need to add the CameraHelper as a child of the scene.
// camera helper
cameraHelper = new THREE.CameraHelper( camera2 );
scene.add( cameraHelper );
updated fiddle: http://jsfiddle.net/Sps3c/1/
Tip: I added an OrbitControls instance to your demo for a better view of the situation.
three.js r.66