I've developed a few simple iOS apps, and I'm looking to develop a simple maze game. What I'm looking to do is really basic, so I'm looking for some guidance on which way to go.
Here is basically what I'm trying to do: a simple escape-the-maze game where I read in a 2-dimensional array that defines the maze.
I want the perspective to be standing in the middle of each square, and when you flick or hit a button, it will advance one square, and you can turn left or right. I'd like to be able to apply textures to the walls based on the numbers in the array, and I'd really like to see the walls move towards me and then stop as I advance.
The walls will always be the same height, and the maze will be simple square blocks, although I do want it to be able to handle rooms that might be larger than 2x2, like maybe 4x4, and be able to render that.
I'm not sure if trying to do something in OpenGL would be overkill since my requirements are really basic. Like I mentioned, it won't be a free move; it will be advance one square each time you hit a button and turn left or right.
I have read something about ray casting for this sort of thing but am not sure how I would accomplish this in iOS. Also I'd like to have the maze not take up the whole screen, maybe 2/3, while the rest is standard iOS controls like buttons and labels, and a background behind it.
What books or articles should I read to help me? I'd prefer not to use a 3rd party engine since this seems really basic.
You can do this using Core Animation, which is much easier than OpenGL. If you import QuartzCore into your project, you can position any view in 3D as follows:
//set the view's 2D position so that the vanishing point is the middle
//of the screen - use the same centre for every view
view.center = CGPointMake(window.bounds.size.width / 2.0f, window.bounds.size.height / 2.0f);
//create a 3d transform
CATransform3D transform = CATransform3DIdentity;
transform.m34 = -1.0f/500.0f; // this sets the 3D perspective
//you can transform the view in 3D relative to the camera
//this rotates the view by 45 degrees about the Y axis
//but you can also scale, translate, etc using equivalent functions
transform = CATransform3DRotate(transform, M_PI_4, 0.0f, 1.0f, 0.0f);
//transform the view in 3D
view.layer.transform = transform;
So if you create your walls as UIImageViews, you can arrange them into a room by transforming each one individually using the logic above.
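For example, here is a rough sketch of placing one wall this way; wallImage, the 200-point square size and the distances are placeholders you'd derive from your maze array:

UIImageView *wall = [[UIImageView alloc] initWithImage:wallImage];
//same vanishing point as above
wall.center = CGPointMake(window.bounds.size.width / 2.0f, window.bounds.size.height / 2.0f);
[window addSubview:wall];

CATransform3D wallTransform = CATransform3DIdentity;
wallTransform.m34 = -1.0f / 500.0f;                        //same perspective as above
//push the wall one square (say 200 points) away from the camera...
wallTransform = CATransform3DTranslate(wallTransform, 0.0f, 0.0f, -200.0f);
//...and rotate it 90 degrees about the Y axis to make it a side wall
wallTransform = CATransform3DRotate(wallTransform, M_PI_2, 0.0f, 1.0f, 0.0f);
wall.layer.transform = wallTransform;

Animating from one transform to the next (for example with a CABasicAnimation on the layer's transform key) then gives you the walls sliding toward you as you step forward one square.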
Related
How can one create a scene/terrain like Clash Royale's game scene in Blender? The game scene seems like a top view but is not exactly top view. At what angle should the camera be placed to get that effect?
You would be looking for isometric projection or maybe 2.5D, also called 3/4 perspective. You can find several isometric tutorials on YouTube.
While I'm not sure if there are any strict rules to follow unless you are making graphics that must fit in an existing game, it is common to use an orthographic camera at about a 45 degree angle.
Clash Royale looks like it could be close to a 45 degree orthographic camera angle.
I'm a novice at OpenGL ES 1.1 (for iOS) texturing, and I have a problem making a motion blur effect. While googling, I found that I should render my scene at different moments in time to several textures and then draw all those textures on the screen with different alpha values. But the problem is that I don't know how to implement any of this! So, my questions are:
How do I draw a 2D texture on the screen? Should I make a square and put my texture on it? Or maybe there is a way to draw a texture on the screen directly?
How do I draw several textures (one upon another) on the screen with different alpha values?
I've already come up with some ideas, but I'm not sure if they are correct or not.
Thanks in advance!
Well, of course the first advice is, understand the basics before trying to do advanced stuff. Other than that:
Yes indeed, to draw a full-screen texture you just draw a textured screen-sized quad. An orthographic projection would be a good idea in this case, making the screen-alignment of the quad and its proper sizing easier. For getting the textures in the first place (by rendering into them), FBOs might be of help, but I'm not sure they are supported on ES 1 devices, otherwise the good old glCopyTexSubImage2D will do, too, albeit requiring a copy operation.
Well, you just draw multiple textured quads (see 1) one over the other. You might configure the texture environment to scale the texture's color with the quad's base color (glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)) and give your quads a color of (1, 1, 1, alpha) (of course lighting should be disabled). Additionally you have to enable alpha blending (glEnable(GL_BLEND)) and use an appropriate blending function (glBlendFunc(GL_SRC_ALPHA, GL_ONE) should do).
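If it helps to see both steps concretely, here is a rough ES 1.1 sketch, assuming you already have the textures of the previous frames in an array; frameTextures and frameCount are placeholder names, and error handling is omitted:

#import <OpenGLES/ES1/gl.h>

//a full-screen quad in normalized device coordinates, drawn as a triangle strip
static const GLfloat quadVertices[]  = { -1, -1,   1, -1,   -1, 1,   1, 1 };
static const GLfloat quadTexCoords[] = {  0,  0,   1,  0,    0, 1,   1, 1 };

void drawMotionBlur(const GLuint *frameTextures, int frameCount) {
    //simple orthographic projection so the quad maps 1:1 to the screen
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glDisable(GL_LIGHTING);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);

    //scale the texture's colour by the quad's colour, and blend by alpha
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, quadVertices);
    glTexCoordPointer(2, GL_FLOAT, 0, quadTexCoords);

    //draw each frame's texture over the previous ones with its own alpha
    for (int i = 0; i < frameCount; i++) {
        GLfloat alpha = 1.0f / (GLfloat)(i + 1);   //example weighting only
        glColor4f(1.0f, 1.0f, 1.0f, alpha);
        glBindTexture(GL_TEXTURE_2D, frameTextures[i]);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }
}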
But if all these terms don't tell you anything, you should rather first learn the basics using a good learning resource before delving into more advanced effects.
I'm trying to write a fairly simple animation using Core Animation to simulate a book cover being opened. The book cover is an image, and I'm applying the animations to its layer object.
As well as animating a rotation around the y axis (with the anchorPoint property set to the left edge of the screen), I need to scale the right-hand edge of the image up so it appears to "get closer" to the user and create the illusion of depth. Flipboard, for example, does this scaling really well.
I can't find any way of scaling an image like this, so only one edge is scaled and the image ends up nonrectangular.
All help appreciated with this one!
CoreAnimation, by default, "flattens" its 3D hierarchy into a 2D world at z=0. This causes perspective and the like to not work properly. You need to host your layer in a CATransformLayer, which will render its sublayers as a true 3D layer hierarchy.
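A minimal sketch of that setup, assuming coverImage is your UIImage and the sizes/positions here are placeholders:

CATransformLayer *container = [CATransformLayer layer];
container.frame = self.view.bounds;
[self.view.layer addSublayer:container];

CALayer *cover = [CALayer layer];
cover.contents = (__bridge id)coverImage.CGImage;
cover.bounds = CGRectMake(0, 0, 200, 300);            //example size
cover.anchorPoint = CGPointMake(0.0, 0.5);            //hinge on the left edge (the spine)
cover.position = CGPointMake(160, 240);               //example position
[container addSublayer:cover];

CATransform3D open = CATransform3DIdentity;
open.m34 = -1.0f / 500.0f;                            //perspective
open = CATransform3DRotate(open, -M_PI_2 * 0.8, 0.0f, 1.0f, 0.0f);
cover.transform = open;                               //animate this to open the cover

Because the sublayers live in a CATransformLayer, the right-hand edge of the cover gets larger as it rotates toward the viewer, which is the depth effect you're after, without having to skew the image yourself.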
I am trying to develop an iOS app to make any given image (UIImage) warp on selected locations.
What would be the right way to go about this? For now I'm doing some research on doing it in OpenGL (frankly, any heads-up on the framework would be nice too).
So, to sum up, the requirement is to warp a UIImage at some given locations (given their x, y coordinates).
If you're sufficiently familiar with (or willing to learn) OpenGL, then you could do this:
Create a flat, rectangular grid of points to be a mesh that will be displayed with OpenGL.
Apply the image to the mesh as a texture.
When distorting the image at a particular location, you can just decide which points on the mesh will be affected by the distortion, and move them.
You can push points out from the center, or in toward a center, or shift them all in the same direction. If the distortion affects a large area, then you change a lot of points (possibly changing those in the center by more than those near the edges of the affected area).
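A rough sketch of the "decide which points are affected and move them" step, assuming a GRID_W x GRID_H grid of vertex positions that already has the image mapped onto it as a texture; the names and the falloff formula are just illustrative:

#include <math.h>

#define GRID_W 32
#define GRID_H 32

typedef struct { float x, y; } GridPoint;
static GridPoint grid[GRID_W * GRID_H];   //mesh positions, laid out row by row

//push the vertices near (cx, cy) outward, with the strongest movement at the centre
void warpAt(float cx, float cy, float radius, float strength) {
    for (int i = 0; i < GRID_W * GRID_H; i++) {
        float dx = grid[i].x - cx;
        float dy = grid[i].y - cy;
        float dist = sqrtf(dx * dx + dy * dy);
        if (dist > 0.0f && dist < radius) {
            float falloff = 1.0f - (dist / radius);    //1 at the centre, 0 at the edge
            grid[i].x += (dx / dist) * strength * falloff;
            grid[i].y += (dy / dist) * strength * falloff;
        }
    }
    //then re-upload the positions (glVertexPointer + glDrawArrays, or however you
    //draw the mesh) while keeping the original texture coordinates unchanged
}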
Not sure what you mean by 'warp'. Do you mean skew it in three dimensions? If so, you can adjust the CGAffineTransform of the UIImageView you are displaying it in to get that effect.
If you mean some kind of image processing warp, and you are using iOS 5, you can use Core Image for that.
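For example, something along these lines; note that most of the distortion filters (such as CIBumpDistortion) only became available after iOS 5, so check [CIFilter filterNamesInCategory:kCICategoryDistortionEffect] on your deployment target first. Here sourceImage is your UIImage and (x, y) is the point to warp, both placeholders:

#import <CoreImage/CoreImage.h>

CIImage *input = [CIImage imageWithCGImage:sourceImage.CGImage];

CIFilter *bump = [CIFilter filterWithName:@"CIBumpDistortion"];
[bump setValue:input forKey:kCIInputImageKey];
[bump setValue:[CIVector vectorWithX:x Y:y] forKey:kCIInputCenterKey];
[bump setValue:@150.0 forKey:kCIInputRadiusKey];   //example values
[bump setValue:@0.6 forKey:kCIInputScaleKey];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgResult = [context createCGImage:bump.outputImage fromRect:bump.outputImage.extent];
UIImage *warped = [UIImage imageWithCGImage:cgResult];
CGImageRelease(cgResult);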
I do not really understand how I'm supposed to render a side-scroller. How do I know what to render when my character moves? What kind of positioning should I use for the characters?
I hope my question is clear
The easiest way I've found to do it is to have characterX and characterY variables (integer or float, whatever you want), and then a cameraX and cameraY variable. Every object in the scene is drawn at theObjectX - cameraX, theObjectY - cameraY.
cameraX/cameraY are tweened by a midpoint-style formula so they eventually reach playerX/playerY: Cx = (Cx*99 + Px)/100.
By doing this, every object keeps its position in the stage's space and is only transformed at render time, which saves you a lot of headaches.
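A minimal sketch of that idea; the Sprite type and drawImageAt are placeholders for whatever your engine actually uses:

typedef struct { float x, y; } Sprite;          //stand-in for your own entity type

void drawImageAt(float screenX, float screenY); //your own blit routine (placeholder)

static float cameraX = 0.0f, cameraY = 0.0f;

//ease the camera toward the player each frame (the midpoint-style tween above)
void updateCamera(float playerX, float playerY) {
    cameraX = (cameraX * 99.0f + playerX) / 100.0f;
    cameraY = (cameraY * 99.0f + playerY) / 100.0f;
}

//objects keep their positions in stage space; only the draw call subtracts the camera
void drawObject(const Sprite *s) {
    drawImageAt(s->x - cameraX, s->y - cameraY);
}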
Use a matrix to define a camera reference frame.
Use space partitioning to split up your level into screens/windows.
Think of your player sprite as any other entity, like enemies and interactive objects.
Now what you want is the abstraction of a camera. You can define a camera as a 3x3 matrix with this layout:
[rotX_X, rotY_X, 0]
[rotX_Y, rotY_Y, 0]
[transX, transY, 1]
The 2x2 sub-matrix in the top-left corner is a rotation matrix. transX and transY defines the translation part, i.e the origin. You also get scaling for free. Just simply scale the rotation part with a scalar, and you have yourself a zoom.
For this to work properly with rotation, your sprites need to be polygons/primitives, say like triangles or quads; you can't just apply the matrix to the positions of the sprites when drawing. If you don't need rotation, just transforming the center point will work fine.
If you want the camera to follow the player, use the player's position as the camera origin, i.e. the translation vector [transX, transY].
So how do you apply the matrix to entity positions and model vertices? You do a vector-matrix multiplication.
v' = vM^-1, where v' is the new vector, v is the old vector, and M^-1 is the inverse of the camera matrix. The camera needs to be applied as an inverse transform because it defines a local coordinate system. An analogy: if you are in front of me and I turn left from my reference frame, from your reference frame I am turning to your right. This applies to all affine and linear transformations, like scaling, rotation and translation.
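A rough sketch of that, using the row-vector layout above with just rotation and translation (a uniform scale can be folded into the rotation part the same way if you want zoom):

#include <math.h>

typedef struct { float x, y; } Vec2;

//bring a world-space point into camera/view space: v' = (v - translation) * R^T,
//which is the same as multiplying by the inverse of the camera matrix
Vec2 worldToView(Vec2 v, float camAngle, float camX, float camY) {
    float dx = v.x - camX;                                //undo the translation first
    float dy = v.y - camY;
    Vec2 out;
    out.x =  dx * cosf(camAngle) + dy * sinf(camAngle);   //multiply by the
    out.y = -dx * sinf(camAngle) + dy * cosf(camAngle);   //transposed rotation
    return out;
}

//e.g. if the camera follows the player: worldToView(entityPos, angle, playerX, playerY)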
Split up your level into sub-parts so you can cull objects and scenery that do not need to be rendered. Your viewport has a certain size/resolution, so only render scenery and entities that intersect with it. Instead of checking each and every entity against the viewport bounds, assign each entity to a certain sub-screen and test the bounds of the sub-screen against the viewport and camera bounds. If you divide your levels into parts that are the same size as your viewport, then the maximum number of screens visible at any particular time is:
2 if your camera only scrolls left and right.
4 if your camera scrolls left, right, up and down.
4 if your camera scrolls in any direction, and additionally can be rotated.
A screen-change is an event you can use to activate entities belonging to that screen. That could be enemies, background animations, doors or whatever you like.
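A sketch of that per-screen culling test; Rect, the screens array and drawScreen are placeholders for your own structures:

typedef struct { float x, y, w, h; } Rect;

void drawScreen(int index);   //your own routine: draw/activate everything assigned to that screen

static int rectsIntersect(Rect a, Rect b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

void drawVisibleScreens(Rect viewport, const Rect *screens, int screenCount) {
    for (int i = 0; i < screenCount; i++) {
        if (rectsIntersect(screens[i], viewport)) {
            //a screen becoming visible is also where you'd fire the
            //"screen-change" event mentioned above
            drawScreen(i);
        }
    }
}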
If this is your first foray into writing a side-scroller, I'd suggest considering an existing game engine (like Construct, GameMaker or XNA, or whatever fits your experience level) so you don't have to worry about what order to render things in and how to make it all work. Mess with that a bit (probably exploring a few of them) to get a feel for how they do things, then venture out on your own once you've gotten used to it.
Not that there's anything wrong with baptism by fire, but it can get pretty overwhelming in my opinion.