Keeping an object made in OpenGL within the boundaries of a window - objective-c

I have been working on a game using objective c and OpenGL. I know how to create the object and how to make it move the way I want, but I cannot keep it within the window. How do you keep the object within the window?

OpenGL FAQ, section 8.070: How can I automatically calculate a view that displays my entire model?
The following is from a posting by Dave Shreiner on setting up a basic
viewing system:
First, compute a bounding sphere for all objects in your scene. This
should provide you with two bits of information: the center of the
sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it
"diam").
Next, choose a value for the zNear clipping plane. General guidelines
are to choose something larger than, but close to 1.0. So, let's say
you set:
zNear = 1.0;
zFar = zNear + diam;
Structure your matrix calls in this order (for an Orthographic projection):
GLdouble left = c.x - diam;
GLdouble right = c.x + diam;
GLdouble bottom = c.y - diam;
GLdouble top = c.y + diam;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(left, right, bottom, top, zNear, zFar);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
This approach should center your objects in the middle of the window and stretch them to fit (i.e., it's assuming that you're using a
window with aspect ratio = 1.0). If your window isn't square, compute
left, right, bottom, and top, as above, and put in the following logic
before the call to glOrtho():
GLdouble aspect = (GLdouble) windowWidth / windowHeight;
if ( aspect < 1.0 ) { // window taller than wide
    bottom /= aspect;
    top /= aspect;
} else {
    left *= aspect;
    right *= aspect;
}
The above code should position the objects in your scene appropriately. If you intend to manipulate the view (e.g., rotate, translate, etc.), you need to add a viewing transform to it.
A typical viewing transform will go on the ModelView matrix and might
look like this:
gluLookAt(0., 0., 2.*diam,
          c.x, c.y, c.z,
          0.0, 1.0, 0.0);

a "look at" camera is a very convenient technique to make a viewer track an object (hence however the object or camera point moves, the object will stay onscreen). see 'gluLookAt' for a commonly provided implementation, most 3d helper code libraries will have one. You give it a desired camera point, object of interest, and 'up-vector', and it will create an appropriate world (camera transformation) matrix.
Otherwise, if you're in control of the object, just don't move it outside of the initial frustum.
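If you take that route, a minimal sketch (in Java, assuming the orthographic bounds left/right/bottom/top from above and a hypothetical, roughly circular object of radius objRadius) is to clamp the object's position every frame:

// Clamp a value into [min, max].
static float clamp(float value, float min, float max) {
    return Math.max(min, Math.min(max, value));
}

// Each frame, before drawing, keep the object's center far enough from every
// edge of the view that the whole object remains visible.
objX = clamp(objX, left + objRadius, right - objRadius);
objY = clamp(objY, bottom + objRadius, top - objRadius);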

Related

How does the viewport work in libgdx and how to set it up correctly?

I am learning the use of libgdx and I got confused by the viewport and how objects are arranged on the screen. Let's assume my 2D world is 2x2 units wide and high. Now I create a camera whose viewport is 1x1, so I should see 25% of my world. Usually displays are not square-shaped, so I would expect libgdx to squish and stretch this square to fit the display.
For a side-scroller you would set the viewport height the same as the world height and adjust the viewport width according to the aspect ratio, as sketched below. Independent of the aspect ratio of your display, you always see the full height of the world but a different expanse along the x-axis. Somebody with a wider-than-high display can see further along the x-axis than somebody with a square-shaped display, but proportions are maintained and there is no distortion. So far I thought I had mastered how the viewport logic works.
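As a sketch of that setup (the OrthographicCamera fields and update() call are real libGDX API; WORLD_HEIGHT is a hypothetical constant):

// In resize(int width, int height): fix the world height and derive the
// visible width from the window's aspect ratio, so nothing is distorted.
camera.viewportHeight = WORLD_HEIGHT;
camera.viewportWidth = WORLD_HEIGHT * (float) width / (float) height;
camera.update();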
I am working with the book "Learning LibGDX Game Development" in which you develop the game "canyon bunny". The source code can be found here:
Canyon Bunny - GitHub
In the WorldRenderer class you find the initialization of the camera:
private void init() {
    batch = new SpriteBatch();
    camera = new OrthographicCamera(Constants.VIEWPORT_WIDTH, Constants.VIEWPORT_HEIGHT);
    camera.position.set(0, 0, 0);
    camera.update();
}
The viewport constants are saved in a separate Constants-Class:
public class Constants {
    // Visible game world is 5 meters wide
    public static final float VIEWPORT_WIDTH = 5.0f;
    // Visible game world is 5 meters tall
    public static final float VIEWPORT_HEIGHT = 5.0f;
}
As you can see, the viewport is 5x5. But the game objects have the right proportions on my phone (16:9), and even on a desktop, when you change the window size, the game maintains the correct proportions. I don't understand why. I would expect the game to try to paint a square-shaped cutout of the world onto a rectangle-shaped display, which would lead to distortion. Why is that not the case? And why don't you need to adapt the width or height of the viewport to the aspect ratio?
The line:
cameraGUI.setToOrtho(true);
overrides the values you gave when you called:
cameraGUI = new OrthographicCamera(Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Here's the LibGDX code that shows why/how the viewport sizes you set were ignored:
/** Sets this camera to an orthographic projection using a viewport fitting the screen resolution, centered at
 * (Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2), with the y-axis pointing up or down.
 * @param yDown whether y should be pointing down */
public void setToOrtho (boolean yDown) {
    setToOrtho(yDown, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}

/** Sets this camera to an orthographic projection, centered at (viewportWidth/2, viewportHeight/2), with the y-axis pointing up
 * or down.
 * @param yDown whether y should be pointing down.
 * @param viewportWidth
 * @param viewportHeight */
public void setToOrtho (boolean yDown, float viewportWidth, float viewportHeight) {
    if (yDown) {
        up.set(0, -1, 0);
        direction.set(0, 0, 1);
    } else {
        up.set(0, 1, 0);
        direction.set(0, 0, -1);
    }
    position.set(zoom * viewportWidth / 2.0f, zoom * viewportHeight / 2.0f, 0);
    this.viewportWidth = viewportWidth;
    this.viewportHeight = viewportHeight;
    update();
}
So you would need to do this instead:
cameraGUI.setToOrtho(true, Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Also, don't forget to call update() right after you change the position, viewport dimensions, or other properties of your camera.
I found the reason. If you take a look at the WorldRenderer class, there is a method resize(). In this method the viewport is adapted to the aspect ratio. I am just wondering, because until now I thought the resize method was only called when resizing the window. Apparently it's also called at startup. Can anybody clarify?

How to pinch zoom into a certain location in OpenGL ES 2?

There are other posts on Stack Overflow on pinch zooming, but I haven't found any helpful ones for OpenGL that do what I'm looking for. I am currently using the orthoM function to change the camera position and to do scaling in OpenGL. I have gotten the camera to move around, and have gotten pinch zooming to work, but the zooming always zooms into the center of the OpenGL surface view coordinate system at 0,0. After trying different things, I haven't found a way yet that allows the camera to move around while also allowing pinch zooming to the user's touch point (as an example, the touch controls in Clash of Clans are similar to what I am trying to make).
(The method I'm currently using to get the scale value is based on this post.)
My first attempt:
// mX and mY are the movement offsets based on the user's touch movements,
// and can be positive or negative
Matrix.orthoM(mProjectionMatrix, 0, ((-WIDTH/2f)+mX)*scale, ((WIDTH/2f)+mX)*scale,
((-HEIGHT/2f)+mY)*scale, ((HEIGHT/2f)+mY)*scale, 1f, 2f);
In the above code, I realize that the camera moves towards the coordinate 0,0 because, as scale gets increasingly smaller, the values for the camera edges decrease towards 0. So although the zoom goes towards the coordinate system center, the camera moves at the right speed at any scale level.
So, I then edited the code to this:
Matrix.orthoM(mProjectionMatrix, 0, (-WIDTH/2f)*scale+mX, (WIDTH/2f)*scale+mX,
(-HEIGHT/2f)*scale+mY, (HEIGHT/2f)*scale+mY, 1f, 2f);
The edited code now makes the zoom go toward the center of the screen no matter where in the surface view coordinate system the camera is (although that isn't the full goal), but the camera movement is off, as the offset isn't adjusted for the different scale levels.
I'm still working to find a solution myself, but if anyone has any advice or ideas on how this could be implemented, I would be glad to hear.
Note, I don't think it matters, but I'm doing this in Android and using Java.
EDIT:
Since I first posted this question, I have made some changes to my code. I found this post, which explains the logic of how to pan the camera to the correct position based on the scale, so that the zoompoint remains in the same position.
My updated attempt:
// Only do the following if-block if two fingers are on the screen
if (zooming) {
    // midPoint is a PointF object that stores the coordinate of the
    // midpoint between two fingers
    float scaleChange = scale - prevScale; // scale is the same as in my previous code
    float offsetX = -(midPoint.x*scaleChange);
    float offsetY = -(midPoint.y*scaleChange);
    cameraPos.x += offsetX;
    cameraPos.y += offsetY;
}
// cameraPos is a PointF object that stores the coordinate at the center of the screen,
// and replaces the previous values mX and mY
left = cameraPos.x-(WIDTH/2f)*scale;
right = cameraPos.x+(WIDTH/2f)*scale;
bottom = cameraPos.y-(HEIGHT/2f)*scale;
top = cameraPos.y+(HEIGHT/2f)*scale;
Matrix.orthoM(mProjectionMatrix, 0, left, right, bottom, top, 1f, 2f);
The code does work quite a bit better now, but it still isn't completely accurate. I tested how the code worked with panning disabled, and the zooming worked somewhat better. However, when panning is enabled, the zooming doesn't focus in on the zoompoint at all.
I finally found a solution while working on another project, so I'll post (in simplest form possible) what worked for me in case this could help anyone by chance.
final float currentPointersDistance = this.calculateDistance(pointer1CurrentX, pointer1CurrentY, pointer2CurrentX, pointer2CurrentY);
final float zoomFactorMultiplier = currentPointersDistance/initialPointerDistance; //> Get an initial distance between two pointers before calling this
final float newZoomFactor = previousZoomFactor*zoomFactorMultiplier;
final float zoomFactorChange = newZoomFactor-previousZoomFactor; //> previousZoomFactor is the current value of the zoom
//> The x and y values of the variables are in scene coordinate form (not surface)
final float distanceFromCenterToMidpointX = camera.getCenterX()-currentPointersMidpointX;
final float distanceFromCenterToMidpointY = camera.getCenterY()-currentPointersMidpointY;
final float offsetX = -(distanceFromCenterToMidpointX*zoomFactorChange/newZoomFactor);
final float offsetY = -(distanceFromCenterToMidpointY*zoomFactorChange/newZoomFactor);
camera.setZoomFactor(newZoomFactor);
camera.translate(offsetX, offsetY);
initialPointerDistance = currentPointersDistance; //> Make sure to do this
Method used to calculate the distance between two pointers:
public float calculateDistance(float pX1, float pY1, float pX2, float pY2) {
    float x = pX2-pX1;
    float y = pY2-pY1;
    return (float)Math.sqrt((x*x)+(y*y));
}
Camera class methods used above:
public float getXMin() {
    return centerX-((centerX-xMin)/zoomFactor);
}
public float getYMin() {
    return centerY-((centerY-yMin)/zoomFactor);
}
public float getXMax() {
    return centerX+((xMax-centerX)/zoomFactor);
}
public float getYMax() {
    return centerY+((yMax-centerY)/zoomFactor);
}
public void setZoomFactor(float pZoomFactor) {
    zoomFactor = pZoomFactor;
}
public void translate(float pX, float pY) {
    xMin += pX;
    yMin += pY;
    xMax += pX;
    yMax += pY;
}
The orthoM() function is called like the following:
Matrix.orthoM(projectionMatrix, 0, camera.getXMin(), camera.getXMax(), camera.getYMin(), camera.getYMax(), near, far);

THREE.js rotating camera around an object using orbit path

I am struggling to solve this problem.
On my scene, I have a camera which looks at the center of mass of an object. I have some buttons that enable setting the camera position to particular views (front view, back view, ...) along an invisible sphere that surrounds the object (constant radius).
When I click on a button, I would like the camera to move from its start position to the end position along the sphere's surface. While the camera moves, I would like it to keep pointing at the center of mass of the object.
Has anyone have a clue on how to achieve this?
Thanks for help!
If you are happy to use basic trigonometry, then in your initialisation section you could do this:
var cameraAngle = 0;
var orbitRange = 100;
var orbitSpeed = 2 * Math.PI/180;
var desiredAngle = 90 * Math.PI/180;
...
camera.position.set(orbitRange,0,0);
camera.lookAt(myObject.position);
Then in your render/animate section you could do this:
// compare with >= rather than ==, since incrementing a float rarely lands
// exactly on the target angle
if (cameraAngle >= desiredAngle) { orbitSpeed = 0; }
else {
    cameraAngle += orbitSpeed;
    camera.position.x = Math.cos(cameraAngle) * orbitRange;
    camera.position.y = Math.sin(cameraAngle) * orbitRange;
    camera.lookAt(myObject.position); // keep facing the object as the camera moves
}
Of course, your buttons would modify what the desiredAngle was (0°, 90°, 180° or 270° presumably), you need to rotate around the correct plane (I am rotating around the XY plane above), and you can play with the orbitRange and orbitSpeed until you are happy.
You can also modify orbitSpeed as it moves along the orbit path, speeding up and slowing down at various cameraAngles for a smoother ride. This process is called 'tweening' and you could search on 'tween' or 'tweening' if you want to know more. I think Three.js has tweening support but have never looked into it.
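For example, a hand-rolled smoothstep tween (a sketch of the idea in Java rather than Three.js, since the math is the same anywhere):

// Smoothstep easing: starts slow, speeds up in the middle, slows down at the
// end. Call once per frame with t = elapsedTime / totalDuration.
static float easedAngle(float startAngle, float desiredAngle, float t) {
    t = Math.max(0f, Math.min(1f, t));   // clamp progress to [0, 1]
    float eased = t * t * (3f - 2f * t); // smoothstep curve
    return startAngle + (desiredAngle - startAngle) * eased;
}

Each frame you would set cameraAngle from easedAngle() and recompute the camera position from it, instead of adding a fixed orbitSpeed.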
Oh, also remember to set your camera's far property to be greater than orbitRange or you will only see the front half of your object and, depending on what it is, that might look weird.

Particles inside a moving Box2d world are getting drawn on top instead of inside a layer

I'm using LevelHelper to build my level, and I'm adding some particles (dynamically initialized CCParticleSystemQuad's) inside the level. All works fine until I move the world (it's a dynamically drawn Box2D world in which I follow the player with the camera). If I move the world, newly added particles, which are emitting continuously, are drawn at the right position, but the particles in the subsequent animation seem to be drawn relative to the global world/screen position. This gives a weird 'trippy' effect which looks totally unrealistic. The particles should be redrawn/refreshed inside the world.
LevelHelperLoader * lh = gameLayer.lh;
LHLayer * layer = [lh layerWithUniqueName:@"MAIN_LAYER"];
NSArray * array = [lh spritesWithTag:WORTEL];
CCParticleSystemQuad * particle;
CGPoint position;
for (LHSprite * sprite in array) {
    particle = [CCParticleSystemQuad particleWithFile:@"DirtParticles.plist"];
    [layer addChild:particle z:0];
    position = sprite.position;
    position.y += sprite.contentSize.height * 0.5f;
    [particle setPosition:position];
    [particle resetSystem];
}
Does anybody know what I might be doing wrong?
Try changing the particle position type:
particle.positionType = kCCPositionTypeFree;
The alternatives are kCCPositionTypeRelative and kCCPositionTypeGrouped. You may have to try all to see which of them best fits your scenario, I'm guessing it's either "free" or "relative".

Why do my raytraced spheres have dark lines when lit with multiple light sources?

I have a simple raytracer that only traces rays as far as the first intersection. The scene looks OK with either of two different light sources, but when both lights are in the scene, there are dark shadows where the lit area from one light ends, even in the middle of an area lit by the other light source (particularly noticeable on the green ball). The transition from the area lit by both light sources to the area lit by just one seems to be slightly darker than the area lit by just one light source.
The code where I'm adding the lighting effects is:
// trace lights
for (int l = 0; l < primitives.count; l++) {
    Primitive* p = [primitives objectAtIndex:l];
    if (p.light)
    {
        Sphere * lightSource = (Sphere *)p;
        // calculate diffuse shading
        Vector3 *light = [[Vector3 alloc] init];
        light.x = lightSource.centre.x - intersectionPoint.x;
        light.y = lightSource.centre.y - intersectionPoint.y;
        light.z = lightSource.centre.z - intersectionPoint.z;
        [light normalize];
        Vector3 * normal = [[primitiveThatWasHit getNormalAt:intersectionPoint] retain];
        if (primitiveThatWasHit.material.diffuse > 0)
        {
            float illumination = DOT(normal, light);
            if (illumination > 0)
            {
                float diff = illumination * primitiveThatWasHit.material.diffuse;
                // add diffuse component to ray color
                colour.red += diff * primitiveThatWasHit.material.colour.red * lightSource.material.colour.red;
                colour.blue += diff * primitiveThatWasHit.material.colour.blue * lightSource.material.colour.blue;
                colour.green += diff * primitiveThatWasHit.material.colour.green * lightSource.material.colour.green;
            }
        }
        [normal release];
        [light release];
    }
}
How can I make it look right?
It's a perceptual effect called Mach banding.
You are also very likely viewing the images in the wrong color space. Your ray tracer is doing the lighting math in a "linear" space, but then you are almost certainly viewing those images on a display with a nonlinear response, and therefore not even seeing the correct results. This could easily be making the Mach bands much more prominent than if you were displaying them properly. Try learning about gamma correction.
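As a rough illustration (a Java sketch using the common gamma-2.2 approximation rather than the exact sRGB transfer function):

// Convert a linear-light channel value in [0, 1] to a gamma-encoded value for
// display. Keep all lighting math in linear space; encode only at output time.
static float linearToDisplay(float linear) {
    float clamped = Math.max(0f, Math.min(1f, linear));
    return (float) Math.pow(clamped, 1.0 / 2.2);
}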
Your eyes are deceiving you. If you move the spheres from the three pictures together, you will see very clearly that the singly lit areas are the same color and the doubly lit areas are brighter. If you want to make it look nicer, I suggest you add a whole arc of light sources between the current ones.
You've saturated one colour channel in the image; turn down the brightness a bit and see what happens.
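For example (a Java sketch with hypothetical red/green/blue channel values), rather than letting one channel clip at 1.0, you could scale the whole colour down uniformly:

// If any channel exceeds 1.0, divide all channels by the largest one; this
// dims the colour without shifting its hue the way per-channel clipping does.
static float[] toneDown(float red, float green, float blue) {
    float maxChannel = Math.max(red, Math.max(green, blue));
    if (maxChannel > 1f) {
        return new float[] { red / maxChannel, green / maxChannel, blue / maxChannel };
    }
    return new float[] { red, green, blue };
}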
Are you sure your lighting directions are both normalized?
It may be worth throwing an assert in there.
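For instance (a Java-flavored sketch, since the thread's code is Objective-C):

// Fail fast if a direction vector isn't unit length (enable with java -ea).
static void assertNormalized(float x, float y, float z) {
    float len = (float) Math.sqrt(x * x + y * y + z * z);
    assert Math.abs(len - 1.0f) < 1e-4f : "direction not normalized, length=" + len;
}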