I am rendering my scene into a texture so that I can apply post-processing before displaying the final result. However, when I added this feature, MSAA/CSAA stopped working. Is there a way (other than performance-intensive FSAA) to get anti-aliasing to work?
I am targeting multiple platforms (Android 2.2+, iPhone 3GS+, all iPads), so I am looking for a way to do this without requiring extensions (unless they are ubiquitous).
Yes, MSAA works by default only on the default framebuffer. For rendering to a texture you should use multisampled render targets instead. This thread can be helpful:
Multisampled render to texture in iOS
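On iOS specifically, the approach in that thread uses the APPLE_framebuffer_multisample extension, which is available on exactly the hardware you list (iPhone 3GS+ and all iPads): render into a multisampled FBO, then resolve it into your texture-backed FBO. A minimal sketch, assuming resolveFramebuffer (with your target texture attached) and the width/height variables are set up elsewhere; note this extension is Apple-only, so Android needs a different path:

GLuint msaaFramebuffer, msaaColorRenderbuffer;
glGenFramebuffers(1, &msaaFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFramebuffer);

glGenRenderbuffers(1, &msaaColorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, msaaColorRenderbuffer);
// 4x multisampled color storage (APPLE_framebuffer_multisample)
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, msaaColorRenderbuffer);

// ... draw the scene into msaaFramebuffer ...

// Resolve the multisampled samples into the texture-backed FBO.
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolveFramebuffer);
glResolveMultisampleFramebufferAPPLE();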
My app has a toolbar which is normally 64 pixels high. On OS X (with a Retina display) the toolbar's height still equals 64 (logical) pixels.
If I pass a 64x64 bitmap when creating a wxBitmapButton, I get a blurry image (which is expected), so I need some way to pass a 128x128 bitmap.
When I do pass one, it's just shown cropped, without proper scaling. So how can I use wxBitmapButton to show a high-quality bitmap?
I know this is a rather old question by now, but there is finally a good answer to it if you're using the latest version of wxWidgets from Git, or 3.1.6+ once it is released.
The answer is to use wxBitmapBundle, which is basically a smart container for bitmaps to be used at different resolutions/DPI scale factors. In the simplest case, which is sufficient under Mac, you just need to create a bundle from two bitmaps, to be used at normal (100%) and high (200%) DPI, using wxBitmapBundle::FromBitmaps(bmpNormal, bmpHigh), and pass this object to wxBitmapButton::SetBitmap().
This also works with wxToolBar tools, wxStaticBitmap and other (though not yet all) classes. The bitmap bundle can also be created from resources, which is especially convenient under Mac, as you can just put normal.png and normal@2x.png files in your application bundle's Resources subdirectory (this use of "bundle" has nothing to do with wxBitmapBundle; it's just an unfortunately overloaded term).
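A minimal sketch of the above (wxWidgets 3.1.6+); the file names, parent window and function name are illustrative placeholders, and it assumes the PNG image handler has been initialised:

#include <wx/bmpbndl.h>
#include <wx/bmpbuttn.h>

void AddRetinaButton(wxWindow* parent)
{
    // Requires wxInitAllImageHandlers() (or the PNG handler) at startup.
    wxBitmap bmpNormal("toolbar_icon_64.png", wxBITMAP_TYPE_PNG);   // 100% DPI
    wxBitmap bmpHigh("toolbar_icon_128.png", wxBITMAP_TYPE_PNG);    // 200% DPI

    // The bundle picks the right bitmap for the display's scale factor.
    wxBitmapButton* button = new wxBitmapButton(
        parent, wxID_ANY, wxBitmapBundle::FromBitmaps(bmpNormal, bmpHigh));
    // (Equivalently, create the button first and pass the bundle to SetBitmap().)
}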
I'm having a weird issue in OpenGL. I'm designing a 2D engine, and so far I've coded the routines that let you draw sprites, rectangles and boxes, and translate and scale them. However, when I run a small demo of my engine and gradually scale rectangles in an animation (drawn using 4 vertices and GL_LINE_LOOP), the rectangle edges seem to bounce between the two neighboring pixels.
I can't determine the source of the problem, or even formulate a proper search query for Google. Can someone shed some light on this matter? If my question is not clear, please let me know.
Building a 2D library on OpenGL ES is going to be problematic for several reasons. First of all, the Khronos specifications state that it is not intended to produce "pixel perfect" rendering. Every OpenGL ES renderer is allowed some variation in rendered results. This is because the actual rendering is implemented in hardware and floating point rounding can be a little different from platform to platform. Even the shader compilers are completely different from one GPU to the next.
Another issue is that most of the GPUs on mobile devices today are tile-based deferred renderers, and they do not typically support partial screen rendering. In other words, every screen update requires replacing the entire frame.
I'm having difficulty finding any documentation about cropping images using OpenGL ES on the iPhone or iPad.
Specifically, I am capturing video frames at a mildly rapid pace (20 FPS), and need something quick that will crop an image. Is it feasible to use OpenGL here? If so, will it perform faster than cropping using Core Image and its associated methods?
It seems that using Core Image methods, I can't achieve faster than about 10-12 FPS output, and I'm looking for a way to hit 20. Any suggestions or pointers to usage of OpenGL for this?
Obviously, using OpenGL ES will be faster than the Core Image framework. Cropping an image is done by setting the texture coordinates. In general, the texture coordinates for a full image look like this:
{
    0.0f, 1.0f,   // upper left
    1.0f, 1.0f,   // upper right
    0.0f, 0.0f,   // lower left
    1.0f, 0.0f    // lower right
}
The whole image will be drawn with the texture coordinates above. If you just want the upper-right part of the image, you can set the texture coordinates like this:
{
    0.5f, 1.0f,   // upper left
    1.0f, 1.0f,   // upper right
    0.5f, 0.5f,   // lower left
    1.0f, 0.5f    // lower right
}
This will give you the upper-right quarter of the whole image. Never forget that the texture coordinate origin in OpenGL ES is at the lower-left corner.
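To make that concrete, here is a minimal fixed-function (OpenGL ES 1.1) sketch of drawing the cropped quad; the vertex positions, and the assumption that the texture is already bound, are illustrative:

static const GLfloat vertices[] = {
    -1.0f,  1.0f,   // upper left of the on-screen quad
     1.0f,  1.0f,   // upper right
    -1.0f, -1.0f,   // lower left
     1.0f, -1.0f,   // lower right
};
static const GLfloat texCoords[] = {
    0.5f, 1.0f,     // upper left of the crop region
    1.0f, 1.0f,     // upper right
    0.5f, 0.5f,     // lower left
    1.0f, 0.5f,     // lower right
};

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);   // two triangles covering the quad

Under OpenGL ES 2.0 the same coordinates would be fed to a shader as vertex attributes instead.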
We've now got four resolutions to support, and my app needs at least six full-screen background images to be pretty. I don't want to break the bank on megabytes of images.
I see guides online about loading PDFs as images, and about custom SVG libraries, but no discussion of which is practical.
Here's the question: considering rendering speed and file size, what is the best way to use vector images in iOS? And in addition, are there any practical caching or other considerations one should make in real-world app development?
Something to consider for simple graphics, such as the kind of thing used for backgrounds, is just to render them at runtime using Core Graphics (CG).
For example, in one of our apps, instead of including the typical repeating background tile image in all the required resolutions, we instead draw it once into a CGPatternRef, then convert it to a UIColor, at which point things become simple.
We still use graphic files for complex things, but for anything that's simple in nature, we just render it at runtime and cache the result, so we get resolution independence without gobs of image files. It's also made maintenance quite a bit easier.
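As a rough sketch of that pattern approach (the tile size, colors, drawing callback and function name are all illustrative; Core Graphics is a plain C API): create the pattern with a drawing callback, wrap it in a pattern color space, and produce a CGColorRef, which on iOS can then be wrapped in a UIColor with +[UIColor colorWithCGColor:]:

#include <CoreGraphics/CoreGraphics.h>

// Hypothetical callback that draws a single 32x32 tile of the background.
static void drawTile(void *info, CGContextRef ctx)
{
    CGContextSetRGBFillColor(ctx, 0.90, 0.90, 0.95, 1.0);
    CGContextFillRect(ctx, CGRectMake(0, 0, 32, 32));
    CGContextSetRGBFillColor(ctx, 0.80, 0.80, 0.90, 1.0);
    CGContextFillRect(ctx, CGRectMake(0, 0, 16, 16));    // checker pattern
    CGContextFillRect(ctx, CGRectMake(16, 16, 16, 16));
}

CGColorRef CreateTiledBackgroundColor(void)
{
    static const CGPatternCallbacks callbacks = { 0, &drawTile, NULL };
    CGPatternRef pattern = CGPatternCreate(
        NULL,                          // info pointer for the callback
        CGRectMake(0, 0, 32, 32),      // bounds of one tile
        CGAffineTransformIdentity,
        32, 32,                        // horizontal/vertical tile spacing
        kCGPatternTilingConstantSpacing,
        true,                          // the callback draws its own colors
        &callbacks);
    CGColorSpaceRef space = CGColorSpaceCreatePattern(NULL);
    CGFloat alpha = 1.0;
    CGColorRef color = CGColorCreateWithPattern(space, pattern, &alpha);
    CGColorSpaceRelease(space);
    CGPatternRelease(pattern);
    return color;   // caller releases; wrap in a UIColor on iOS
}

Because the tile callback runs in whatever context uses the color, it draws at the device's native scale, which is what gives the resolution independence mentioned above.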
I'm trying to build a weather application on the iPad, but it seems that I need some help with animation. Say I'm animating a radar: the radar source files are 10 GIF/JPEG pictures at 900x700 pixels. I've tried the UIImage animation technique using the tutorial here:
http://www.icodeblog.com/2009/07/24/iphone-programming-tutorial-animating-a-game-sprite/
but it seems that loading 10 images that big is too much for the iPad to handle, and it's crashing due to memory warnings. I'm researching other techniques to animate, but I can't seem to find something that will do this efficiently.
I've looked at others like Core Animation using sprites, and Cocos2D with sprites. Can someone point me in the right direction for the best way to animate these big images? (Keep in mind that these images are dynamic and change often, so the sprites will have to be recreated on a server and fetched by the iPad to do the animation.) Thanks
OpenGL ES only creates textures with dimensions that are powers of 2. In the case of your images, that's 1024x1024, which is 4 MB of memory per image at 32 bits per pixel. Still, that shouldn't be a problem with the iPad.
First, investigate using Xcode's profiling tools to ensure the images aren't being repeatedly loaded into memory at each loop of the animation (likely by way of new objects that aren't sharing cached textures). That could solve your problem from the start.
Second, I recommend using Cocos2D, if only for its easy handling of textures and caching. Toss the images into a CCAnimation, pop that into a CCRepeatForever, and run it with a CCSequence (see the sketch after this answer). When you're done, hit CCTextureCache to release unused textures.
Third, lower your animation framerate to 30 or less (if only for this animation). It may be the iPad, but you're making a weather app, not a video game.
Finally, reduce the size of your images. Justify it all you want, but a large radar animation will not sell your app. And even if a website is already playing that animation beautifully, remember that a desktop has vastly more memory and power than any smartphone.
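A minimal sketch of that second suggestion, written against the cocos2d-x 2.x C++ API (whose class names mirror the cocos2d-iphone ones above); the frame file names, frame count and frame delay are placeholders for the radar images fetched from the server:

#include "cocos2d.h"
using namespace cocos2d;

void runRadarAnimation(CCSprite* radarSprite)
{
    CCAnimation* animation = CCAnimation::create();
    for (int i = 0; i < 10; ++i) {
        // Hypothetical naming scheme for the downloaded frames.
        const char* name =
            CCString::createWithFormat("radar_%d.png", i)->getCString();
        animation->addSpriteFrameWithFileName(name);
    }
    animation->setDelayPerUnit(0.2f);   // 5 FPS is plenty for a radar loop

    CCAnimate* animate = CCAnimate::create(animation);
    radarSprite->runAction(CCRepeatForever::create(animate));
}

// Later, once the animation is torn down, free the texture memory:
// CCTextureCache::sharedTextureCache()->removeUnusedTextures();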
Try breaking the animation image into smaller parts and animating those instead, treating each component as a sprite. It would be best if you used primarily code (Core Graphics) and drew your radar "by hand" instead of just using images as if they were animated GIFs.