I've seen that texture atlases are used for animations, but would it be appropriate to use for storing UI elements or items that are unrelated?
The short answer is yes, it can be appropriate. You can put UI elements in a texture atlas if they are related to one another.
From Apple's Documentation:
Using Texture Atlases to Collect Related Art Assets
Art assets stored in your app bundle aren’t always unrelated images. Sometimes they are collections of images that are being used together for the same sprite. For example, here are a few common collections of art assets:
Animation frames for a character
Terrain tiles used to create a game level or puzzle
Images used for user interface controls, such as buttons, switches, and sliders
If each texture is treated as a separate object, then Sprite Kit and the graphics hardware must work harder to render scenes—and your game’s performance might suffer. Specifically, Sprite Kit must make at least one drawing pass per texture. To avoid making multiple drawing passes, Sprite Kit uses texture atlases to collect related images together. You specify which assets should be collected together, and Xcode builds a texture atlas automatically. Then, when your game loads the texture atlas, Sprite Kit manages all the images inside the atlas as if they were a single texture. You continue to use SKTexture objects to access the elements contained in the atlas.
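For illustration, here is a minimal sketch of loading related UI art from a texture atlas in SpriteKit; the atlas name "UI" and the texture names are placeholders.

```swift
import SpriteKit

// Minimal sketch: Xcode builds the atlas from a "UI.atlas" folder
// (or asset catalog) of images; the names here are placeholders.
let uiAtlas = SKTextureAtlas(named: "UI")

let playButton = SKSpriteNode(texture: uiAtlas.textureNamed("button_play"))
let sliderKnob = SKSpriteNode(texture: uiAtlas.textureNamed("slider_knob"))

// Both nodes sample sub-regions of the same underlying texture, so
// SpriteKit can draw them with fewer texture switches.
```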
TL;DR: From within my MTKView's delegate drawInMTKView: method, part of my rendering pass involves adding an MPSImageBilinearScale performance shader and zero or more MTLBlitCommandEncoder requests for generateMipmapsForTexture. Is that a smart thing to do from within drawInMTKView:, which happens on the main thread? Do either of them block the main thread while running or are they only being encoded and then executed later and entirely on the GPU?
Longer Version:
I'm playing around with Metal within the context of an imaging application. I use Core Image to load an image and apply filters. The output image is displayed as a 2D plane in a metal view with a single texture. This works, but to improve performance I wanted to experiment with Core Image's ability to render out smaller tiles at a time. Each tile is rendered into its own IOSurface.
On each render pass, I check if there are any tiles that have been recently rendered. For each rendered tile (which is now an IOSurface), I create a Metal texture from a CVMetalTextureCache that is backed by the surface.
I then use a scaling MPS to copy from the tile texture into the "master" texture. If a tile was copied over, I issue a blit command to generate the mipmaps on the master texture.
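To make that concrete, here is a simplified sketch of that part of my render pass; the command buffer, tile texture, and master texture are placeholders, and the scale transform that positions the tile within the master texture is omitted.

```swift
import Metal
import MetalPerformanceShaders

// Simplified sketch; the real code also configures the MPS kernel's
// scale transform so the tile lands in the right region of the master.
func copyTile(into masterTexture: MTLTexture,
              from tileTexture: MTLTexture,
              commandBuffer: MTLCommandBuffer,
              device: MTLDevice) {
    // Scale/copy the freshly rendered tile into the large "master" texture.
    let scale = MPSImageBilinearScale(device: device)
    scale.encode(commandBuffer: commandBuffer,
                 sourceTexture: tileTexture,
                 destinationTexture: masterTexture)

    // The master texture is now dirty, so regenerate its mipmaps.
    if let blit = commandBuffer.makeBlitCommandEncoder() {
        blit.generateMipmaps(for: masterTexture)
        blit.endEncoding()
    }
}
```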
What I'm seeing is that if my master texture is quite large, then generating the mipmaps can take "a bit of time". The same is true if I have a lot of tiles. This appears to block the main thread, because my FPS drops significantly. (The MTKView is running at the standard 60fps.)
If I play around with tile sizes, I can improve performance in some areas but decrease it in others. For example, increasing the tile size that Core Image renders creates fewer tiles, and thus fewer mipmap generations and blits, but at the cost of Core Image taking longer to render each region.
If I decrease the size of my "master" texture, then mipmap generation goes faster since only the dirty textures are updated, but there appears to be a lower bound on how small I should make the master texture, because if I make it too small I need to pass a large number of textures to the fragment shader. (And it looks like that limit might be 128?)
What's not entirely clear to me is how much of this I can move off the main thread while still using MTKView. If part of the rendering pass is going to block the main thread, then I'd prefer to move it to a background thread so that UI elements (like sliders and checkboxes) remain fully responsive.
Or maybe this isn't the right strategy in the first place? Is there a better way to display really large images in Metal other than tiling? (i.e.: Images larger than Metal's texture size limit of 16384?)
When I was learning to program simple 2D games, each object would have a sprite sheet with little pictures of how the player would look in every frame of every animation. 3D models don't seem to work this way, or we would need one image for every possible view of the object!
For example, a rotating cube would need a lot of images depicting how it would look from every single side. So my question is: how are 3D model "images" represented and rendered by the engine when viewed from arbitrary perspectives?
Multiple methods
There are a number of methods for rendering and storing 3D graphics and models. There are even different methods for rendering 2D graphics! In addition to 2D bitmaps, you also have SVG. SVG uses numbers to define points in an image. These points make shapes. The points can also define curves. This allows you to make images without the need for pixels. The result can be smaller file sizes, in addition to the ability to transform the image (scale and rotate) without causing distortion. Most 3D graphics use a similar technique, except in 3D. What these methods have in common, however, is that they all ultimately render the data to a 2D grid of pixels.
Projection
The most common method for rendering 3D models is projection. All of the shapes to be rendered are broken down into triangles before rendering. Why triangles? Because triangles are guaranteed to be coplanar. That saves a lot of work for the renderer since it doesn't have to worry about "coloring outside of the lines". One drawback to this is that most 3D graphics projection technologies don't support perfect spheres or other round surfaces. You have to use approximations and other tricks to make round surfaces (although there are some renderers which support round surfaces). The next step is to convert or project all of the 3D points into 2D points on the screen (as seen below).
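As a rough illustration (not tied to any particular API), projecting a 3D point onto the screen can be as simple as dividing by its distance from the camera:

```swift
import simd

// Minimal perspective projection sketch: points farther from the camera
// (larger z) end up closer to the center of the screen.
func project(_ point: SIMD3<Float>,
             focalLength: Float,
             screenCenter: SIMD2<Float>) -> SIMD2<Float> {
    let scale = focalLength / point.z   // assumes the point is in front of the camera (z > 0)
    return SIMD2<Float>(screenCenter.x + point.x * scale,
                        screenCenter.y - point.y * scale)
}

// Each triangle's three corners are projected like this, and the resulting
// 2D triangle is then filled in ("colored in") by the rasterizer.
```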
From there, you essentially "color in" the triangles to make everything look solid. While this is pretty fast, another downside is that you can't really have things like reflections and refractions. Anytime you see a refractive or reflective surface in a game, the developers are only using trickery to make it look like a reflective or refractive material. The same goes for lighting and shading.
Here is an example of special coloring being used to make a sphere approximation look smooth. Notice that you can still see straight lines around the smoothed version:
Ray tracing
You can also render polygons using ray tracing. With this method, you basically trace the paths that the light takes to reach the camera. This allows you to make realistic reflections and refractions. However, I won't go into detail, since it is currently too slow to realistically use in games. It is mainly used for 3D animations (like what Pixar makes). Simple scenes with low quality settings can be ray traced pretty quickly, but with complicated, realistic scenes, rendering can take several hours for a single frame (as is the case with Pixar movies). However, it does produce ultra realistic images:
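To give a feel for the idea, here is a heavily simplified sketch of intersecting a single primary ray with a sphere; a real ray tracer fires rays like this for every pixel and then follows bounces for reflection, refraction, and shadows.

```swift
import simd

struct Ray {
    var origin: SIMD3<Float>
    var direction: SIMD3<Float>   // assumed normalized
}

// Distance along the ray to the nearest hit on a sphere, or nil on a miss.
// Standard quadratic-formula ray/sphere intersection (with |direction| = 1).
func intersect(_ ray: Ray, center: SIMD3<Float>, radius: Float) -> Float? {
    let oc = ray.origin - center
    let b = 2 * simd_dot(oc, ray.direction)
    let c = simd_dot(oc, oc) - radius * radius
    let discriminant = b * b - 4 * c
    guard discriminant >= 0 else { return nil }
    let t = (-b - discriminant.squareRoot()) / 2
    return t >= 0 ? t : nil
}
```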
Ray casting
Ray casting is not to be confused with the above-mentioned ray tracing. Ray casting does not trace the light paths, which means you only get flat, non-reflective surfaces. It also does not produce realistic lighting. However, it can be done relatively quickly, since in most cases you don't even need to cast a ray for every pixel. This is the method that was used for early games such as Doom and Wolfenstein 3D. In those games, ray casting was used for the maps, and the characters and other items were rendered using 2D sprites that always faced the camera. The sprites were drawn from a few different angles to make them look 3D. Here is an image of Wolfenstein 3D:
Castle Wolfenstein with JavaScript and HTML5 Canvas: Image by Martin Kliehm
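Here is a heavily simplified sketch of that idea, casting one ray per screen column through a 2D grid map; all of the names and constants are placeholders.

```swift
import Foundation

// Wolfenstein-style ray casting sketch: `map` is a 2D grid where 1 means "wall".
// Returns the distance to the nearest wall for each screen column; the wall's
// on-screen height is then proportional to 1 / distance.
func castColumns(map: [[Int]],
                 playerX: Double, playerY: Double,
                 playerAngle: Double, fieldOfView: Double,
                 screenWidth: Int) -> [Double] {
    var distances: [Double] = []
    for column in 0..<screenWidth {
        // Fan the rays out across the field of view, one per column.
        let rayAngle = playerAngle - fieldOfView / 2
            + fieldOfView * Double(column) / Double(screenWidth)
        var distance = 0.0
        while distance < 32 {                       // arbitrary maximum view distance
            distance += 0.05                        // coarse fixed step for simplicity
            let x = Int(playerX + cos(rayAngle) * distance)
            let y = Int(playerY + sin(rayAngle) * distance)
            if y < 0 || y >= map.count || x < 0 || x >= map[y].count || map[y][x] == 1 {
                break                               // hit a wall or left the map
            }
        }
        distances.append(distance)
    }
    return distances
}
```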
Storing the data
3D data can be stored using multiple methods. It is not necessarily dependent on the rendering method that is used. The stored data doesn't mean anything by itself, so you have to render it using one of the methods that have already been mentioned.
Polygons
This is similar to SVG. It is also the most common method for storing model data. You define the geometry using 3D points. These points can have other properties, such as texture data (in the form of UV mapping), color data, and whatever else you might want.
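For instance, one vertex of such a model might be stored roughly like this; the exact set of attributes varies by format and engine.

```swift
import simd

// Hypothetical in-memory layout for a single vertex of a polygon model.
struct Vertex {
    var position: SIMD3<Float>   // the 3D point itself
    var normal: SIMD3<Float>     // surface direction, used for lighting
    var uv: SIMD2<Float>         // texture coordinates (UV mapping)
    var color: SIMD4<Float>      // optional per-vertex color
}

// A model is then little more than an array of vertices plus an index list
// saying which triples of vertices form each triangle.
struct Mesh {
    var vertices: [Vertex]
    var triangleIndices: [UInt32]
}
```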
The data can be stored using a number of file formats. A common file format that is used is COLLADA, which is an XML file that stores the 3D data. There are a lot of other formats though. Fundamentally, however, all file formats are still storing the 3D data.
Here is an example of a polygon model:
Voxels
This method is pretty simple. You can think of voxel models like bitmaps, except they are a bunch of bitmaps layered together to make a 3D bitmap, so you have a 3D grid of pixels. One way of rendering voxels is converting the voxel points to 3D cubes. Note that voxels do not have to be rendered as cubes, however. Like pixels, they are only points that may have color data, and that data can be interpreted in different ways. I won't go into much detail since this isn't too common, and you generally render the voxels with polygon methods anyway (like when you render them as cubes). Here is an example of a voxel model:
Image by Wikipedia user Vossman
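Conceptually, a voxel model can be stored as nothing more than a 3D grid of color values, for example:

```swift
import simd

// A voxel model as a flat array addressed as a 3D grid; nil means "empty".
struct VoxelModel {
    let width: Int, height: Int, depth: Int
    private var voxels: [SIMD4<Float>?]   // one optional RGBA color per cell

    init(width: Int, height: Int, depth: Int) {
        self.width = width
        self.height = height
        self.depth = depth
        voxels = Array(repeating: nil, count: width * height * depth)
    }

    subscript(x: Int, y: Int, z: Int) -> SIMD4<Float>? {
        get { voxels[(z * height + y) * width + x] }
        set { voxels[(z * height + y) * width + x] = newValue }
    }
}
```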
In the 2D world with sprite sheets, you are drawing one of the sprites depending on the state of the actor (visual representation of your object). In the 3D world you are rendering a model for your actor that is a series of polygons with a texture mapped to it. There are standardized model files (I am mostly familiar with Autodesk 3DS Max), in which the model and the assigned textures can be packaged together (a .3DS or .MAX file), providing everything your graphics library needs to render the object and its textures.
In a nutshell, you don't use images for each view of a 3D object, you have a model with a texture rendered on it, creating a dynamic view as it is rendered by the graphics library.
I am working on a game where I need to create an enormous "world" over which a rocket ship flies. On this world, I have to programmatically generate various simple background elements. There needs to be a record of the path, so I can't use parallax effects to give the feeling of motion.
When I create the background elements using SKShapeNode and SKLabelNode objects, I get terrible performance, because I literally have to add thousands of these nodes.
Is there a way to draw text and lines directly onto a sprite object?
I have been reading documentation for several hours now about drawing two-dimensional graphics in an Objective-C Cocoa application. There appear to be several different technologies, each specific to certain tasks. My understanding is that the following technologies do the following things. Please correct me if I'm wrong.
Quartz 2D: The primary library for drawing shapes, text, and images to the screen.
Core Graphics: The name of the framework that contains Quartz. It can be used as a synonym for Quartz.
QuartzGL: A GPU acceleration mode for Quartz that is not enabled by default and not necessarily faster for drawing things on the screen.
OpenGL: The lowest-level library; it talks directly to the graphics card at the cost of more lines of code. More suited to 3D graphics.
Core Image: A library for displaying images and text, but not so much for drawing shape primitives.
Core Animation: A library for automatically animating objects. Apparently not suited for moving large numbers of objects, nor for continuous animation of objects.
QuickTime: A library that apparently also does images and text in addition to video, but probably not good for drawing primitive shapes.
What I would like to do is create a browser for a specific type of data. The view would not be very complicated and would consist of drawing rectangles at specific locations. However, the user should be able to move around by dragging the view to the left or the right, and this movement should be fluid. Here is an example that is very close to what I'm trying to make:
http://jbrowse.org/ucsc/hg19/
What drawing technology would you recommend I start coding with?
You want Quartz. Unless you're graphing MASSIVE amounts of data, any Mac (I'm assuming Mac, not iOS) should handle it easily. It is easy, efficient, and will probably get you where you need to go. For the dragging movement, you'll probably manage that with Core Animation layers.
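For example, a minimal sketch of that approach (in Swift for brevity; the class and property names are just illustrative) would be an NSView that draws its rectangles with Quartz and redraws as the horizontal offset changes:

```swift
import Cocoa

// Draws a set of rectangles with Quartz (Core Graphics) inside an NSView.
// `items` and `scrollOffset` stand in for your real data model.
class TrackView: NSView {
    var items: [CGRect] = []
    var scrollOffset: CGFloat = 0 { didSet { needsDisplay = true } }

    override func draw(_ dirtyRect: NSRect) {
        guard let context = NSGraphicsContext.current?.cgContext else { return }
        context.setFillColor(NSColor.systemBlue.cgColor)
        for rect in items {
            // Shift each rectangle by the current horizontal drag offset.
            context.fill(rect.offsetBy(dx: -scrollOffset, dy: 0))
        }
    }
}
```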
Note: Everything in the end is handled by AppKit (Mac) or UIKit (iOS) and, eventually, Core Animation. If you're doing graphics, you will encounter Core Animation at some point, as it manages everything displayed.
Note: If you are graphing that much data, you can use OpenGL, but even then you shouldn't need it until you start displaying many millions of vertices or complex visualisations.
I'm developing an iPhone Cocos2D game and reading about optimization. Some say use a sprite sheet whenever possible; others say use an atlas sprite whenever possible; and others say a plain sprite is fine.
I don't get the "whenever possible" part: when can each one be used, and when can't it?
Also, what is the best use case for each type?
My game will typically use 100 sprites in a grid, with about 5 types of sprites and some other single sprites. What is the best setup for that? Guidelines for deciding in general cases would help too.
Here's what you need to know about spritesheets vs. sprites, generally.
A spritesheet is just a bunch of images put together into one big image, plus a separate file of image location data (e.g., image 1 starts at coordinate 0,0 with a size of 100×100, image 2 starts at coordinate 100,0, etc.).
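For example, in SpriteKit (Cocos2D uses the same idea) pulling one sprite out of a sheet looks roughly like this; the file name and rect values are placeholders, and SKTexture sub-rects are expressed as fractions of the sheet size.

```swift
import SpriteKit

// "sheet" is one big image containing many sprites packed together.
let sheet = SKTexture(imageNamed: "sheet")

// A 100x100 sprite at pixel (0, 0) of a 400x400 sheet, expressed in
// unit (0...1) coordinates of the full sheet.
let frameRect = CGRect(x: 0.0, y: 0.0, width: 0.25, height: 0.25)
let sprite = SKSpriteNode(texture: SKTexture(rect: frameRect, in: sheet))
```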
The advantage here is that loading textures (sprites) is a pretty I/O- and memory-allocation-intensive operation. If you're trying to do this continually in your game, you may get lag.
The second advantage is memory optimization. If you're using transparent PNGs for your images, there may be a lot of blank pixels, and you can remove those and "pack" your texture sizes way down compared to using individual images. Good for both space and memory concerns. (TexturePacker is the tool I use for the latter.)
So, generally, I'd say it's always a good idea to use a sprite sheet, unless you have non-transparent sprites.