Basically I'm using this ParallaxCamera class to create the effect in my game, but upon movement, the layers "wiggle". This is especially noticeable when the camera is moving slowly.
I have a fixed timestep and use interpolated smoothing, and I'm using scaled-up pixel art. The camera is centered on the player and updated after movement. Disabling the parallax effect makes camera movement smooth.
My guesses at what the issue might be:
the layers move at different paces, which means they move at different times
rounding to display pixels makes the layers land in slightly different positions each frame when the camera moves
Thanks for any help
For low-resolution pixel art, this is the strategy I've used. I draw to a small FrameBuffer at 1:1 resolution and then draw that to the screen. That should take care of jittering.
If your Stage is also at the same resolution, then you have to use a bit of a hack to get input to be processed properly. The one I've used is a StretchViewport, but I manually calculate the world width and height so the world isn't stretched, so I'm basically doing the same calculation that ExtendViewport does behind the scenes. You also have to round the width and height to integers. You should do this in resize and apply the width and height using viewport.setWorldWidth() and setWorldHeight(). So in this case it doesn't matter what world size you give to the constructor, since it will be changed in update().
When you call update on the viewport in resize, you need to do it within the context of the FrameBuffer you are drawing to. Otherwise it will mess up the screen's frame buffer dimensions.
public void resize(int width, int height) {
    // Keep the world height fixed and derive the world width from the screen's aspect ratio.
    int worldWidth = Math.round((float)WORLD_HEIGHT / (float)height * (float)width);
    viewport.setWorldWidth(worldWidth);
    viewport.setWorldHeight(WORLD_HEIGHT);
    frameBuffer.begin();
    viewport.update(width, height, true); // the actual screen dimensions
    frameBuffer.end();
}
You can look up examples of using FrameBuffer in LibGDX. You should do all your game drawing in between frameBuffer.begin() and end(), and then draw the frameBuffer's color buffer to the screen like this:
stage.act();
frameBuffer.begin();
//Draw game
stage.draw();
frameBuffer.end();
spriteBatch.setProjectionMatrix(spriteBatch.getProjectionMatrix().idt()); // identity projection: draw in normalized device coordinates
spriteBatch.begin();
spriteBatch.draw(frameBuffer.getColorBufferTexture(), -1, 1, 2, -2); // negative height flips the FBO texture right side up
spriteBatch.end();
In my case, I do a more complicated calculation of world width and world height such that they are a whole number factor of the actual screen dimensions. This prevents the big pixels from being different sizes on the screen, which might look bad. Alternatively, you can change the filtering of the FrameBuffer's texture to linear and use an upscaling shader when drawing it.
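For reference, a minimal sketch of that kind of integer-factor calculation might look like this (MIN_WORLD_HEIGHT is a hypothetical target world height, not something from the code above):
public void resize(int width, int height) {
    // Illustrative only: pick a whole number of screen pixels per world pixel so
    // every scaled-up "big pixel" ends up the same size on screen.
    int pixelScale = Math.max(1, height / MIN_WORLD_HEIGHT); // integer division
    viewport.setWorldWidth(Math.round((float) width / pixelScale));
    viewport.setWorldHeight(Math.round((float) height / pixelScale));
    frameBuffer.begin();
    viewport.update(width, height, true);
    frameBuffer.end();
}
If the screen dimensions aren't exact multiples of the chosen scale, the rounded world size will be off by a fraction of a big pixel, so you may still want to letterbox or crop slightly.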
I'm making an application that generates a 2D area (you can think of it as a drawing), with a camera hovering over it. The size of said drawing isn't known in advance, and could change greatly. After the "drawing" is generated, I want to position the camera so that the whole drawing is in view.
My original idea was to calculate the points at the top, bottom, left, and right of the drawing and have the camera move back, "zooming out" until they are all in sight, but there has to be a better way, right?
Assuming you are working in 2D (thus orthographic camera mode), you can set the camera's orthographicSize:
Camera.main.orthographicSize = height / 2F; //half of the height of the area
Then, set the aspect ratio (width / height):
Camera.main.aspect = 1F; //for example, a square area
My process looks like this:
Define a rectangle I want to draw in, using point dimensions.
Define CGFloat scale = [[UIScreen mainScreen] scale]
Multiply the rectangle's size by the scale.
Create an image context of the rectangle size using CGBitmapContextCreate.
Draw within the image context.
Call CGBitmapContextCreateImage.
Call UIImage imageWithCGImage:scale:orientation: with the appropriate scale.
I had thought this always resulted in pixel-perfect images on both retina and older screens, but I haven't been paying close attention to line contrast/thickness. Generally the strokes have a high contrast to the fill, so I didn't pay attention until now, when there is low contrast between a line and the fill.
I think perhaps I'm misunderstanding user space, but I thought it was simply a direct conversion through whatever scaling and transforms are applied. There is no scaling or transform applied in my particular case except for the retina screen's 2x scaling.
Trying to render a 2-pixel line rather than a 1-pixel one is easier to explain: when I call
CGContextSetLineWidth(context, 2), the line is rendered 1 pixel thick on the retina simulator. 1 pixel! But this should be two pixels on a retina display.
CGContextSetLineWidth(context, 2 * scale) produces a line that is two pixels wide on a retina screen, but I'm expecting it to be 4 pixels.
CGContextSetLineWidth(context, 1) produces a 1-pixel-wide line that is partly transparent. I understand about the stroke straddling the path, so I prefer talking in terms of 2-pixel-wide strokes and paths that lie on pixel boundaries.
I need to understand why the rendered line width is being divided in half.
My fault. I solve 99% of my own bugs on my own just after I post publicly about them.
The drawing code includes CGContextClip after constructing and copying a path. After that, a fill may be applied, gradient or otherwise, then the line is drawn, so everything is nice and tidy. I was focusing on the math and the specific drawing code and did not notice the clipping call, but clipping to the path effectively halves the stroke width: the half of the stroke that straddles the outside of the path gets clipped away. Normally I catch logic bugs like this immediately, but because it was posted to SO, it's appropriate the answer is here too.
I have a painting app. Mouse event coordinates are stored in a VertexArray, and then the vertex array is drawn to the screen. My code structure looks like this:
// I get mouse event coordinates and store them to VertexArray
glPushMatrix();
//some new matrix settings
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
glClear(GL_COLOR_BUFFER_BIT);
//now I draw first full size textured quad and later I draw vertexArray
glDrawArrays(.....);
//and now I draw a second full size textured quad on top of the first quad and whatever has been drawn from the vertex array
glPopMatrix();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
//immediately after that I draw FBO to screen:
glBindTexture(GL_TEXTURE_2D, fbTexture);
//Code for drawing textured quad
glBindTexture(GL_TEXTURE_2D, 0);
So everything is redrawn every time a new mouse event coordinate is registered. And if there are more than 1000 coordinates, drawing becomes really slow. Where could my problem be? I think 1000 vertices is not much for OpenGL.
It's not the number of vertices; it's how you're sending them.
First, you never defined "really slow"; oftentimes people will mistakenly think that a change from 400 FPS to 300 FPS is "slow". It's not. It only represents a render time increase from 2.5 ms per frame to 3.3 ms, a change of less than a single millisecond. Non-trivial, but probably not something to be too concerned over.
It's always important to measure performance in terms of render time, not FPS.
That being said, your main problem is that you're drawing a single quad at a time, each one coming from a separate glDrawArrays command. That's not a good thing, especially if you change state between drawing commands (like binding a texture and so forth).
If you're doing that, then you need to find ways to avoid it. What you want to do is render a lot of quads with one draw call. This means you have to use the same texture for all of them.
The common solution to this problem is to make a larger texture that has multiple images in different locations. This is commonly called a "texture atlas" (Google that for the details). Each quad would have texture coordinates for the particular image it renders. Text is often drawn in such a way, where each letter (glyph) is stored in the same texture.
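To make the batching idea concrete, here's a rough, framework-agnostic sketch in Java (Quad and Region are hypothetical holders, not from your code): it packs many quads, each referencing a sub-rectangle of a shared atlas texture, into one interleaved (x, y, u, v) vertex array that could then be submitted in a single draw call.
static class Region { float u0, v0, u1, v1; }            // the image's UV rectangle inside the atlas
static class Quad   { float x, y, w, h; Region region; } // position/size plus which atlas image to use

static float[] buildBatch(Quad[] quads) {
    float[] data = new float[quads.length * 6 * 4];      // 6 vertices per quad, 4 floats per vertex
    int i = 0;
    for (Quad q : quads) {
        float x0 = q.x, y0 = q.y, x1 = q.x + q.w, y1 = q.y + q.h;
        Region r = q.region;
        float[] v = {                                    // two triangles per quad
            x0, y0, r.u0, r.v0,   x1, y0, r.u1, r.v0,   x1, y1, r.u1, r.v1,
            x0, y0, r.u0, r.v0,   x1, y1, r.u1, r.v1,   x0, y1, r.u0, r.v1
        };
        System.arraycopy(v, 0, data, i, v.length);
        i += v.length;
    }
    return data;
}
A single glDrawArrays(GL_TRIANGLES, 0, quads.length * 6) over that buffer then replaces the per-quad draw calls.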
I am facing this problem while trying to rotate the map in my iPhone app:
The rotation happens, but the view gets clipped. I want to avoid the clipping. Any tips?
Here's the code:
viewToRotate.layer.transform = CATransform3DMakeRotation(0.8, 0., 0., 1.);
Do you need your map rotated in 3D? If not (which I think is the case), then just use CGAffineTransformMakeRotation (be careful, as it takes the angle in radians).
Also, if you don't want your map to be clipped, you need to make your map bigger, like in the image below (open in new tab to see it bigger)
http://img593.imageshack.us/img593/4498/calculatemapboundswhenr.png
First, you need to calculate the diagonal of the rectangle (your visible map) as instructed in the image above (which I call "radius" because that would be the radius of the smallest circle bigger than your rectangle).
Second, using the radius, you need to calculate the (smallest) square that will allow your map to be seen without clipping. This square will be used to set the bounds of your map (caution: NOT the frame - Apple specifies that, when using rotation, you should not use frame - just bounds and / or center).
Make sure this square is centered on the center of your visible map rectangle (i.e. the square should have X pixels above AND below the small rectangle ... and Y pixels left AND right of the small rectangle).
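The geometry itself is language-agnostic; as a quick illustration (a sketch in Java, with made-up numbers), the side of that smallest square is just the rectangle's diagonal:
static double enclosingSquareSide(double visibleWidth, double visibleHeight) {
    // The rectangle's diagonal is the side of the smallest square that keeps the
    // map unclipped no matter how far it is rotated.
    return Math.sqrt(visibleWidth * visibleWidth + visibleHeight * visibleHeight);
}
// e.g. enclosingSquareSide(320, 460) ≈ 560.4, so a roughly 560 x 560 square,
// centered on the same point as the visible rectangle, avoids clipping.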
Hope it helps!
Did you ever figure out the solution?
The only way I could do it was to make the MapView in Interface Builder much bigger than the actual size of the screen area it's supposed to cover, then I centered the MapView so that its center was in the center of the narrower viewable area.
Rotation seemed to work similarly to how it works in the built-in Maps app.
My guess is that you have to do this so that the image tiles streaming in from Google cover a wide enough area to "fill in the blanks", so to speak, even if they're not always visible.
If you apply a little math, you could probably programmatically size and position the MapView so that you avoid clipping but don't require more tiles than absolutely necessary.
I don't really understand how I'm supposed to render a side-scroller. How do I know what to render when my character moves? What kind of positioning should I use for the characters?
I hope my question is clear
The easiest way I've found to do it is to have characterX and characterY variables (integer or float, whatever you want), and then cameraX and cameraY variables. Every object in the scene is drawn at theObjectX - cameraX, theObjectY - cameraY.
cameraX/cameraY are tweened by a similar-to-midpoint formula, so eventually they'll reach playerX/playerY: Cx = (Cx*99 + Px)/100
By doing this, every object moves in the stage's space, and is transformed only on render [saving you from headaches]
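A minimal sketch of that approach in Java (all names here are illustrative; drawSpriteAt stands in for whatever draw call your framework provides):
float characterX, characterY;   // player position in stage space
float cameraX, cameraY;         // camera position, eased toward the player

void update() {
    // Tween the camera toward the player a little each frame (the Cx = (Cx*99 + Px)/100 idea).
    cameraX = (cameraX * 99 + characterX) / 100;
    cameraY = (cameraY * 99 + characterY) / 100;
}

void render(float objectX, float objectY) {
    // Objects keep their stage-space coordinates; the camera offset is applied only at render time.
    float screenX = objectX - cameraX;
    float screenY = objectY - cameraY;
    drawSpriteAt(screenX, screenY);  // hypothetical draw call
}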
Use a matrix to define a camera reference frame.
Use space partitioning to split up your level into screens/windows.
Think of your player sprite as any other entity, like enemies and interactive objects.
Now what you want is the abstraction of a camera. You can define a camera as a 3x3 matrix with this layout:
[rotX_X, rotY_X, 0]
[rotX_Y, rotY_Y, 0]
[transX, transY, 1]
The 2x2 sub-matrix in the top-left corner is a rotation matrix. transX and transY defines the translation part, i.e the origin. You also get scaling for free. Just simply scale the rotation part with a scalar, and you have yourself a zoom.
For this to work properly with rotation, your sprites need to be polygons/primitives, say like triangles or quads; you can't just apply the matrix to the positions of the sprites when drawing. If you don't need rotation, just transforming the center point will work fine.
If you want the camera to follow the player, use the player's position as the camera origin. That is the translation vector [transX, transY]
So how do you apply the matrix to entity positions and model vertices? You do a vector-matrix multiplication.
v' = vM^-1, where v' is the new vector, v is the old vector, and M^-1 is the inverse of the camera matrix. A camera needs to be an inverse transform because it defines a local coordinate system. An analogy: if you are in front of me and I turn left in my reference frame, then from your reference frame I am turning to your right. This applies to all affine and linear transformations, like scaling, rotation and translation.
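As a concrete illustration (not part of the answer above), here is a minimal Java sketch of that inverse transform for the common case of translation, rotation and uniform zoom; camX/camY/camAngle/camZoom are assumed camera parameters:
static float[] worldToView(float x, float y,
                           float camX, float camY, float camAngle, float camZoom) {
    // Undo the camera transform in reverse order: translation, then rotation, then scale.
    float dx = x - camX;
    float dy = y - camY;
    float cos = (float) Math.cos(-camAngle);
    float sin = (float) Math.sin(-camAngle);
    float viewX = (dx * cos - dy * sin) / camZoom;
    float viewY = (dx * sin + dy * cos) / camZoom;
    return new float[] { viewX, viewY };
}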
Split up your level into sub-parts so you can cull objects and scenery that do not need to be rendered. Your viewport is of a certain size/resolution. Only render scenery and entities that intersect your viewport. Instead of checking each and every entity against the viewport bounds, assign each entity to a certain sub-screen and test the bounds of the sub-screen against the viewport and camera bounds. If you divide your levels into parts which are the same size as your viewport, then the maximum number of screens visible at any particular time is:
2 if your camera only scrolls left and right.
4 if your camera scrolls left, right, up and down.
4 if your camera scrolls in any direction, and additionally can be rotated.
A screen-change is an event you can use to activate entities belonging to that screen. That could be enemies, background animations, doors or whatever you like.
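A rough sketch of the sub-screen visibility test in Java (assuming screens laid out on a grid the same size as the viewport; all names are illustrative):
static boolean screenVisible(int screenCol, int screenRow,
                             float camX, float camY,
                             float viewWidth, float viewHeight) {
    float screenX = screenCol * viewWidth;   // the sub-screen's world-space bounds
    float screenY = screenRow * viewHeight;
    // Axis-aligned overlap test between the sub-screen and the camera's viewport.
    return screenX < camX + viewWidth  && screenX + viewWidth  > camX
        && screenY < camY + viewHeight && screenY + viewHeight > camY;
}
Only entities assigned to screens for which this returns true need to be drawn or activated.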
If this is your first foray into writing a side-scroller, I'd suggest considering using an already existing game engine (like Construct or Gamemaker or XNA or whatever fits your experience level) so you don't have to worry about what order to render things and how to make it all work. Mess with that a bit--probably exploring a few of them--to get a feel for how they do things then venture out to your own once you've gotten used to it.
Not that there's anything wrong with baptism by fire but it can get pretty overwhelming in my opinion.