I am using wxWidgets to design a GUI that draws multiple layers with transparency on top of each other.
Therefore I have one method for each layer that draws with wxGraphicsContext onto the "shared" wxImage, which is then plotted to the wxWindow in the paintEvent method.
I have the layer data in arrays of exactly the same dimensions as my wxImage, so of course I need to draw/manipulate it pixel by pixel. Currently I am doing that with the drawRectangle routine, and my guess is that this is quite inefficient.
Is there a clever way to manipulate wxImage's pixel data directly, while still letting me use the transparency of each separate layer in the resulting image? Or is the 1x1 pixel drawing with drawRectangle sufficient?
Thanks for any thoughts on this!
You can manipulate wxImage pixels efficiently by accessing them directly: they are stored in two contiguous arrays, one for RGB data and one for alpha, which you can read and write yourself.
The problem is usually converting this wxImage to a wxBitmap that can be displayed -- that is the expensive operation, and to avoid it you can use raw bitmap access to manipulate the wxBitmap directly instead.
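To make that concrete, here is a minimal sketch of both ideas (my own illustration, not code from the question): blending one layer's pixels straight into the wxImage through GetData()/GetAlpha(), and touching a wxBitmap directly through the raw bitmap access classes in wx/rawbmp.h. The 4-bytes-per-pixel layer layout and the function names are assumptions made for the example.

```cpp
#include <wx/image.h>
#include <wx/rawbmp.h>

// Sketch 1: write a layer straight into the shared wxImage instead of calling
// DrawRectangle once per pixel. Assumes layerRGBA holds 4 bytes (RGBA) per pixel.
void BlendLayerIntoImage(wxImage& image, const unsigned char* layerRGBA)
{
    const int w = image.GetWidth();
    const int h = image.GetHeight();

    if (!image.HasAlpha())
        image.InitAlpha();                      // make sure the alpha array exists

    unsigned char* rgb   = image.GetData();     // contiguous RGBRGB... array
    unsigned char* alpha = image.GetAlpha();    // contiguous AAA... array

    for (int i = 0; i < w * h; ++i)
    {
        const unsigned char sr = layerRGBA[4 * i + 0];
        const unsigned char sg = layerRGBA[4 * i + 1];
        const unsigned char sb = layerRGBA[4 * i + 2];
        const unsigned char sa = layerRGBA[4 * i + 3];

        // Simple "source over" blend of the layer onto what is already there.
        rgb[3 * i + 0] = (sr * sa + rgb[3 * i + 0] * (255 - sa)) / 255;
        rgb[3 * i + 1] = (sg * sa + rgb[3 * i + 1] * (255 - sa)) / 255;
        rgb[3 * i + 2] = (sb * sa + rgb[3 * i + 2] * (255 - sa)) / 255;
        alpha[i]       = sa + alpha[i] * (255 - sa) / 255;
    }
}

// Sketch 2: raw access to a wxBitmap via wxAlphaPixelData, which avoids the
// wxImage -> wxBitmap conversion entirely.
void TintBitmap(wxBitmap& bmp)
{
    wxAlphaPixelData data(bmp);
    if (!data)
        return;                                 // bitmap format not supported

    wxAlphaPixelData::Iterator p(data);
    for (int y = 0; y < data.GetHeight(); ++y)
    {
        wxAlphaPixelData::Iterator rowStart = p;
        for (int x = 0; x < data.GetWidth(); ++x, ++p)
        {
            p.Red() = 255; p.Green() = 0; p.Blue() = 0; p.Alpha() = 128;
        }
        p = rowStart;
        p.OffsetY(data, 1);
    }
}
```

A full-image pass like this is typically much cheaper than per-pixel drawRectangle calls, because it never goes through wxGraphicsContext at all. Keep in mind that on some platforms the raw wxBitmap data is premultiplied, so check the wxWidgets raw bitmap access documentation for your target.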
When I was learning to program simple 2D games, each object would have a sprite sheet with little pictures of how a player would look in every frame of an animation. 3D models don't seem to work this way, or we would need one image for every possible view of the object!
For example, a rotating cube would need a lot of images depicting how it would look from every single side. So my question is: how are 3D model "images" represented and rendered by the engine when viewed from arbitrary perspectives?
Multiple methods
There are a number of methods for rendering and storing 3D graphics and models. There are even different methods for rendering 2D graphics! In addition to 2D bitmaps, you also have SVG, which uses numbers to define points in an image; those points make shapes and can also define curves. This lets you describe images without needing pixels, which can mean smaller file sizes and the ability to transform the image (scale and rotate) without distortion. Most 3D graphics use a similar technique, except in 3D. What all of these methods have in common, however, is that they ultimately render the data to a 2D grid of pixels.
Projection
The most common method for rendering 3D models is projection. All of the shapes to be rendered are broken down into triangles before rendering. Why triangles? Because triangles are guaranteed to be coplanar. That saves a lot of work for the renderer since it doesn't have to worry about "coloring outside of the lines". One drawback to this is that most 3D graphics projection technologies don't support perfect spheres or other round surfaces. You have to use approximations and other tricks to make round surfaces (although there are some renderers which support round surfaces). The next step is to convert or project all of the 3D points into 2D points on the screen (as seen below).
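As a rough illustration of that projection step, here is a small sketch assuming a simple pinhole camera sitting at the origin and looking down the negative Z axis; the names (Vec3, ProjectToScreen, focalLength) are made up for the example and are not from any particular engine.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Project a camera-space point onto the screen. Larger -z (farther away)
// pushes the point toward the center of the image: the perspective divide.
Vec2 ProjectToScreen(const Vec3& p, float focalLength,
                     float screenWidth, float screenHeight)
{
    const float px = focalLength * p.x / -p.z;
    const float py = focalLength * p.y / -p.z;

    // Map from camera space to pixel coordinates (origin at the top-left).
    return { screenWidth * 0.5f + px, screenHeight * 0.5f - py };
}

int main()
{
    // One triangle's corners; a renderer projects every vertex like this and
    // then rasterizes ("colors in") the resulting 2D triangle.
    const Vec3 tri[3] = { { -1, 0, -5 }, { 1, 0, -5 }, { 0, 1, -5 } };
    for (const Vec3& v : tri)
    {
        const Vec2 s = ProjectToScreen(v, 500.0f, 800.0f, 600.0f);
        std::printf("(%.1f, %.1f)\n", s.x, s.y);
    }
}
```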
From there, you essentially "color in" the triangles to make everything look solid. While this is pretty fast, another downside is that you can't really have true reflections and refractions. Any time you see a refractive or reflective surface in a game, it is only trickery that makes it look like a reflective or refractive material. The same goes for lighting and shading.
Here is an example of special coloring being used to make a sphere approximation look smooth. Notice that you can still see straight lines around the smoothed version:
Ray tracing
You can also render polygons using ray tracing. With this method, you basically trace the paths that the light takes to reach the camera. This allows you to make realistic reflections and refractions. However, I won't go into detail, since it is too slow to use realistically in games at the moment. It is mainly used for 3D animations (like what Pixar makes). Simple scenes with low quality settings can be ray traced pretty quickly, but with complicated, realistic scenes, rendering can take several hours for a single frame (as is the case with Pixar movies). However, it does produce ultra-realistic images:
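For a flavor of what "tracing a ray" involves, here is a hedged sketch of the most basic building block, a ray-sphere intersection test; a real tracer fires one or more such rays per pixel and then spawns secondary rays for reflections, refractions, and shadows. All of the names here are illustrative.

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Returns the distance along the ray to the first hit, if any. 'dir' must be normalized.
std::optional<float> RaySphere(const Vec3& origin, const Vec3& dir,
                               const Vec3& center, float radius)
{
    const Vec3 oc = Sub(origin, center);
    const float b = Dot(oc, dir);
    const float c = Dot(oc, oc) - radius * radius;
    const float disc = b * b - c;            // discriminant of the quadratic in t
    if (disc < 0.0f)
        return std::nullopt;                 // ray misses the sphere entirely
    const float t = -b - std::sqrt(disc);    // nearer of the two solutions
    if (t < 0.0f)
        return std::nullopt;                 // sphere is behind the ray origin
    return t;
}
```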
Ray casting
Ray casting is not to be confused with the above-mentioned ray tracing. Ray casting does not trace the light paths, which means you only get flat, non-reflective surfaces. It also does not produce realistic lighting. However, it can be done relatively quickly, since in most cases you don't even need to cast a ray for every pixel (a simplified sketch follows below). This is the method that was used for early games such as Doom and Wolfenstein 3D. In those games, ray casting was used for the maps, while the characters and other items were rendered using 2D sprites that always faced the camera. The sprites were drawn from a few different angles to make them look 3D. Here is an image of Wolfenstein 3D:
Castle Wolfenstein with JavaScript and HTML5 Canvas: Image by Martin Kliehm
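Below is a deliberately simplified sketch of that idea: one ray per screen column (not per pixel) marched through a 2D tile map, with the wall's on-screen height shrinking with distance. The map, step size, and field of view are made-up values, and a real caster would also correct the distance to avoid a fish-eye effect.

```cpp
#include <cmath>
#include <string>
#include <vector>

// '#' cells are walls, '.' cells are open floor.
const std::vector<std::string> kMap = {
    "#########",
    "#.......#",
    "#...#...#",
    "#.......#",
    "#########",
};

// March one ray from (px, py) at rayAngle until it hits a wall and return the
// height in pixels of the wall slice for that screen column.
int CastColumn(float px, float py, float rayAngle, int screenHeight)
{
    const float step = 0.01f;                 // coarse fixed-step march
    for (float dist = step; dist < 20.0f; dist += step)
    {
        const int mx = static_cast<int>(px + std::cos(rayAngle) * dist);
        const int my = static_cast<int>(py + std::sin(rayAngle) * dist);
        if (my < 0 || my >= static_cast<int>(kMap.size()) ||
            mx < 0 || mx >= static_cast<int>(kMap[my].size()))
            return 0;                         // walked off the map: nothing hit
        if (kMap[my][mx] == '#')
            return static_cast<int>(screenHeight / dist);
    }
    return 0;
}

void RenderFrame(float px, float py, float facing, int screenWidth, int screenHeight)
{
    const float fov = 1.0f;                   // roughly 57 degrees
    for (int x = 0; x < screenWidth; ++x)
    {
        // One ray per column, spread evenly across the field of view.
        const float rayAngle = facing - fov / 2 + fov * x / screenWidth;
        const int wallHeight = CastColumn(px, py, rayAngle, screenHeight);
        // A real renderer would now draw a vertical strip 'wallHeight' pixels
        // tall, centered in this column, and sprites would be drawn on top.
        (void)wallHeight;
    }
}
```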
Storing the data
3D data can be stored using multiple methods. It is not necessarily dependent on the rendering method that is used. The stored data doesn't mean anything by itself, so you have to render it using one of the methods that have already been mentioned.
Polygons
This is similar to SVG. It is also the most common method for storing model data. You define the geometry using 3D points. These points can have other properties, such as texture data (in the form of UV mapping), color data, and whatever else you might want.
The data can be stored in a number of file formats. A common one is COLLADA, an XML-based format that stores the 3D data. There are a lot of other formats, though; fundamentally, they are all storing the same kind of 3D data.
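As a rough idea of what such a file boils down to once it is loaded, here is a hypothetical in-memory layout: a list of vertices carrying position, UV, and color data, plus a list of triangles that index into it. The structure names are invented for the illustration.

```cpp
#include <array>
#include <vector>

struct Vertex {
    std::array<float, 3> position;   // x, y, z in model space
    std::array<float, 2> uv;         // texture coordinates (UV mapping)
    std::array<float, 3> color;      // per-vertex color, if the model uses it
};

struct Mesh {
    std::vector<Vertex> vertices;
    std::vector<std::array<unsigned, 3>> triangles;   // three vertex indices each
};

// A single textured triangle as an example model.
Mesh MakeExampleMesh()
{
    Mesh m;
    m.vertices.push_back({ {  0.f, 1.f, 0.f }, { 0.5f, 1.f }, { 1.f, 0.f, 0.f } });
    m.vertices.push_back({ { -1.f, 0.f, 0.f }, { 0.f,  0.f }, { 0.f, 1.f, 0.f } });
    m.vertices.push_back({ {  1.f, 0.f, 0.f }, { 1.f,  0.f }, { 0.f, 0.f, 1.f } });
    m.triangles.push_back({ 0, 1, 2 });
    return m;
}
```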
Here is an example of a polygon model:
Voxels
This method is pretty simple. You can think of voxel models like bitmaps, except they are a bunch of bitmaps layered together to make a 3D bitmap, so you have a 3D grid of pixels. One way of rendering voxels is converting the voxel points to 3D cubes. Note that voxels do not have to be rendered as cubes, however. Like pixels, they are only points that may have color data, which can be interpreted in different ways. I won't go into much detail, since this isn't too common and you generally render voxels with polygon methods (as when you render them as cubes; a small sketch follows the image below). Here is an example of a voxel model:
Image by Wikipedia user Vossman
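A voxel grid can be as simple as a 3D array of color values, the 3D analogue of a bitmap. The sketch below uses invented names and a tiny 16x16x16 grid purely for illustration.

```cpp
#include <array>
#include <cstdint>

constexpr int N = 16;

// One 32-bit RGBA value per voxel; 0 means "empty".
using VoxelGrid = std::array<std::array<std::array<std::uint32_t, N>, N>, N>;  // [z][y][x]

// Fill a solid box of voxels with one color.
void FillBox(VoxelGrid& grid,
             int x0, int y0, int z0,
             int x1, int y1, int z1, std::uint32_t rgba)
{
    for (int z = z0; z < z1; ++z)
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
                grid[z][y][x] = rgba;
}

// To draw the grid, each non-empty voxel could be emitted as a small cube mesh
// (the polygon approach mentioned above), skipping faces hidden by neighbors.
```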
In the 2D world with sprite sheets, you are drawing one of the sprites depending on the state of the actor (visual representation of your object). In the 3D world you are rendering a model for your actor that is a series of polygons with a texture mapped to it. There are standardized model files (I am mostly familiar with Autodesk 3DS Max), in which the model and the assigned textures can be packaged together (a .3DS or .MAX file), providing everything your graphics library needs to render the object and its textures.
In a nutshell, you don't use images for each view of a 3D object, you have a model with a texture rendered on it, creating a dynamic view as it is rendered by the graphics library.
I need to draw a textured quad. My texture has some alpha pixels, so I need to do glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
That's OK. But I also need another blending function on that quad (glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);) to achieve texture masking. How can I do that? If I set both glBlendFunc calls, one of them is ignored.
Blending is a framebuffer operation and cannot be set per primitive. If you need to combine several texture layers on a single primitive, do this in a shader and emit a compound color/alpha that interacts in the right way with the chosen blending function. If you need different blending functions, you must use separate draw calls.
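A hedged sketch of the shader approach: sample both the visible texture and a mask texture in the fragment shader, fold the mask into the fragment's alpha there, and leave the quad's single glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in place. The uniform names and the GLSL version are assumptions, not taken from the question's code.

```cpp
// Fragment shader source, passed to glShaderSource() on the C++ side.
const char* kMaskedFragmentShader = R"(
    #version 120
    uniform sampler2D baseTexture;   // the visible texture
    uniform sampler2D maskTexture;   // its alpha channel acts as the mask
    varying vec2 uv;                 // texture coordinate from the vertex shader

    void main()
    {
        vec4  base = texture2D(baseTexture, uv);
        float mask = texture2D(maskTexture, uv).a;

        // Do the masking here instead of with a second glBlendFunc: the emitted
        // alpha already combines the texture's own alpha with the mask, and the
        // one blend function set for the quad does the rest.
        gl_FragColor = vec4(base.rgb, base.a * mask);
    }
)";
```

The quad is then drawn in a single call with the normal source-over blending still enabled.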
I'm in the planning stage of writing a Cocoa drawing application (for Mac, not iOS), and I'm trying to discern whether one of my features is technically possible via any of the drawing frameworks. Any help or relevant information would be greatly appreciated.
The idea is to apply a 3D transformation to an object drawn with Quartz 2D. I've considered capturing the relevant portion of the canvas view (where objects are drawn) as an image and sending it to Core Animation, but that doesn't seem like the best option. Since this is a drawing application, it's less about 3D animation than it is about the transformed shape. This solution is also less than ideal because I assume that if the 2D object were a vector path rather than a bitmap image, I would have to rasterize it to apply such a transformation. The ideal implementation would enable the user to dynamically rotate a flat object in 3 dimensions until she found a suitable orientation, lock in this transformation, and still be able to manually adjust the path's vector points.
Is this feasible? Would it require working directly with OpenGL? Help of any kind is most welcome.
Thank you!
Seems to me that anything you'd do with a 3D transform, you should be able to do with multiple affine transforms. See UIBezierPath's -applyTransform method.
I know that I cannot call CGContext to draw directly; I need to put the drawing logic in drawInContext and then trigger the drawing with "setNeedsDisplay". So I designed a command to execute, but it causes some problems, like this:
Why I can't draw in a loop? (Using UIView in iPhone)
I think CGContext is very different from my previous programming experience (I used the HTML5 canvas, which lets me add more detail after I draw, and Java Swing does too).
Actually, I want to know the suitable way to implement this kind of thing as Apple's programmers intend. Thanks.
There are three approaches to what you're asking. You can draw everything in drawRect:, you can manage multiple layers, or you can draw in an image. Each has advantages, but first you need to think correctly about the problem so that you don't destroy performance.
Drawing happens constantly. Every time anything changes, there may be quite a lot of drawing that has to be done. Not the whole screen usually, but still a lot of drawing. Since drawRect: and drawInContext: can be called many times, they need to be efficient. That means that you don't want to do a lot of expensive calculations, and you don't want to do a lot of useless drawing. "Useless" means "won't actually be displayed because it's off screen or obscured by other drawing."
So in the usual case, you put your actual drawing code in drawRect:, but you do all your calculations elsewhere, generally when your data changes. For example, you read your files, figure out your coordinates, create CGPaths, etc. whenever your data changes (which should be much less frequent than drawing). You save all the results into ivars, and then in drawRect: you just draw the final result. So in your loop example, you would probably have an NSArray of images in your view object, and in drawRect: you would draw them all in order.
Another approach is to create a separate layer for each image, set the image as the content, and then attach the layer to the view. You're done at that point. There is no more drawing code you need to write. Quartz handles layers very efficiently, so this can be a very good solution to a wide variety of problems.
Finally, you can composite everything into an image, and then stick that image in an image view, or draw the image directly in the view, or attach the image to a layer. This is a good solution if you have very complicated drawing (particularly using CGPath). This can be expensive if you're constantly changing things, since you have to create a new image context, draw the old image into the new context, draw on top of it, and then create a new image from the context. But it's good for a complicated drawing that doesn't change often.
But you're correct, CGContext is not like a canvas. It needs to be redrawn every draw cycle. You can do that yourself, or you can use another view object (like UIImageView) to do it for you. But it has to be done one way or another.
I would like to create a custom NSView that takes a layered approach to painting. I imagine the majority of the layers would be the same width and height as the backing view.
Is it appropriate to use the Core Animation classes like CALayer for this task, even though I don't expect to need much animation? Is there a more appropriate approach?
To clarify, the view is not meant to be like a canvas in a Photoshop-like application. It is more of a data display that should allow for user interaction (selecting, moving, scrolling, etc.).
If it's display and layout you're after, I'd say that a CALayer-based architecture is a good choice. For the open source Core Plot framework, we construct all of our graphs and plot elements out of CALayers, and organize them in a regular hierarchy. CALayers are lightweight and use almost identical APIs between Mac and iPhone. They can even be made to respond to touch or mouse events.
For another example of a CALayer-based user interface, my iPhone application's entire equation entry interface is composed of CALayers, including the menu that slides up from below. Performance is slightly better than that of my previous UIView-based implementation, but the same code also works within my preliminary desktop version of the application.
For a drawing program, I would imagine it would be important to hold a buffer of the bitmap data. The only issue with using a CALayer is that the contents property is a CGImageRef. To turn that back into a graphics context for doing further drawing can be a bit of a pain. You'd have to initialize a new context, draw the bitmap data into it, then do whatever drawing operations you wanted to do, and finally turn that back into a CGImageRef. You probably wouldn't be able to avoid doing a number of pretty large memory allocations, which is virtually guaranteed to slow your program way down.
I would consider holding an off-screen buffer for each layer. Take a look at the Quartz CGLayerRef object. I think it probably does what you want to do: it's an off-screen buffer that holds things you might want to draw repeatedly. You can also quickly get a CGContextRef whenever you need it so you can do additional drawing. And you can always use that CGContextRef with NSGraphicsContext if you want to use Cocoa drawing methods.
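As a rough sketch of how that usually looks (CoreGraphics is a plain C API; the destination context would normally come from the current NSGraphicsContext inside your view's drawRect:), with helper names invented for the example:

```cpp
#include <CoreGraphics/CoreGraphics.h>

// Create an off-screen layer sized and matched to the destination context.
CGLayerRef MakeScratchLayer(CGContextRef destination, CGSize size)
{
    return CGLayerCreateWithContext(destination, size, /*auxiliaryInfo*/ NULL);
}

// Draw into the layer's own context; repeated calls accumulate here without
// re-creating a bitmap or a CGImageRef each time.
void DrawIntoLayer(CGLayerRef layer)
{
    CGContextRef ctx = CGLayerGetContext(layer);
    CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.8, 0.5);
    CGContextFillEllipseInRect(ctx, CGRectMake(10, 10, 100, 100));
}

// Composite the cached layer into the view's context; this is cheap compared
// to rebuilding an image from scratch every draw cycle.
void CompositeLayer(CGContextRef destination, CGLayerRef layer)
{
    CGContextDrawLayerAtPoint(destination, CGPointMake(0, 0), layer);
}
```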