I've written an app that relies heavily on iSGL3d for 3D rendering, and I've come to a point now where I need to start fiddling with texture sizes for memory allocation reasons.
My app uses very large textures (1024x1024), and going from that down to 512x512 is unacceptable.
So, using GL ES 2.0 as a basis, I want to reduce my textures slightly, to something closer to 700x700.
I know this is possible, because I've painstakingly handwritten OpenGL code in a previous life that uses non-power-of-two textures.
But I've had a hell of a time trying to sift through iSGL3d's code to find where I can effect this change... and the project appears to be abandoned now.
Basically, by default, even if you use a GLES 2.0 instance, iSGL3d will just make a power-of-two bitmap and dump your texture into it, leaving a bunch of transparent pixels. This is worthless.
Forcing the texture size to a non-power-of-two image generates GL errors. I am assuming this is because I am not properly forcing it everywhere it needs to be forced, or because iSGL3d isn't properly using GLES 2.0 as it should be.
Any pointers at all would be useful...
Simply disabling mipmapping makes even valid textures fail to draw.
Did you set the minification filter for these textures to not use the mipmaps? It defaults to a mipmapping option (GL_NEAREST_MIPMAP_LINEAR), so you have to set it to something else if you don't use mipmaps.
e.g.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
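For the non-power-of-two part of the original question: core OpenGL ES 2.0 does accept NPOT textures, but only with GL_CLAMP_TO_EDGE wrapping and a non-mipmapped minification filter (unless the GL_OES_texture_npot extension is present). A minimal sketch of uploading a 700x700 texture directly, outside of iSGL3d's texture path; 'pixels' is just a placeholder for your decoded RGBA data:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// NPOT in core ES 2.0: no mipmaps, clamp-to-edge wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Upload the 700x700 RGBA image as the only level
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 700, 700, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
If iSGL3d pads the image up to a power of two before it ever reaches glTexImage2D (as the question describes), that padding step is what would need to be bypassed.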
I'm having a weird issue in OpenGL. It goes like this: I'm designing a 2D engine, and so far I've coded the routines that let you draw sprites, rectangles, and boxes, and translate and scale them... However, when I run a small demo of my engine, I notice that when gradually scaling rectangles in an animation (drawn using 4 vertices and GL_LINE_LOOP), the rectangle edges seem to bounce between the two neighboring pixels.
I can't determine the source of the problem or even formulate a proper search query for Google, so I'd appreciate it if someone could shed some light on this matter. If my question is not understood, please let me know.
Building a 2D library on OpenGL ES is going to be problematic for several reasons. First of all, the Khronos specifications state that it is not intended to produce "pixel perfect" rendering. Every OpenGL ES renderer is allowed some variation in rendered results. This is because the actual rendering is implemented in hardware and floating point rounding can be a little different from platform to platform. Even the shader compilers are completely different from one GPU to the next.
Another issue is that most of the GPUs on mobile devices today are tile-based deferred renderers, and they do not typically support partial screen rendering. In other words, every screen update requires replacing the entire frame.
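The jitter itself usually comes down to rasterization rules. A common mitigation (an assumption on my part, not something the answer above prescribes) is to use a pixel-aligned orthographic projection and snap line vertices to pixel centers, for example with an ES 1.x-style fixed-function setup; screenWidth/screenHeight, rectX/rectY and w/h below are placeholders:
// Map one GL unit to one screen pixel (top-left origin)
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0.0f, (GLfloat)screenWidth, (GLfloat)screenHeight, 0.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Snap the rectangle's corners to pixel centers before drawing the GL_LINE_LOOP
GLfloat x = floorf(rectX) + 0.5f;
GLfloat y = floorf(rectY) + 0.5f;
GLfloat verts[] = { x, y,   x + w, y,   x + w, y + h,   x, y + h };
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glDrawArrays(GL_LINE_LOOP, 0, 4);
Even with this, the specification still allows small per-device differences, so exact pixel output should not be relied upon.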
I would like to draw several of the same figures (with the same texture) on screen (OpenGL ES 2.0). These figures will differ in their magnification and minification filters and in their mipmapping states.
The issue is: if I use mipmapping when drawing any figure (i.e. if I have called the glGenerateMipmap() function), I can't switch mipmapping off.
Is it possible to switch mipmapping off once I have called glGenerateMipmap() at least once?
glGenerateMipmap only generates the smaller mipmap images (based on the top-level image). But those mipmaps are not used for filtering unless you select a proper mipmapping filter mode (through glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_..._MIPMAP_...)). So if you don't want your texture mipmap-filtered, just disable it for this particular texture by setting either GL_NEAREST or GL_LINEAR as the minification filter. Likewise, not calling glGenerateMipmap does not mean that there is no mipmapping going on. A mipmapping filter mode (which is also the default for a newly created texture) will still be used, just that the mipmap images contain rubbish (or the texture is actually incomplete, resulting in implementation-defined behaviour, but usually a black texture).
Likewise, you shouldn't call glGenerateMipmap each frame before rendering. Call it once after setting the base image of the texture. As said, it generates the mipmap images, and those won't go away after they've been generated. What decides whether mipmapping is actually used is the texture object's filter mode.
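A short sketch of that pattern, assuming 'tex' is a texture whose base image has already been uploaded:
// Once, right after uploading the level-0 image:
glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmap(GL_TEXTURE_2D);
// Before drawing figures that should be mipmap-filtered:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// ... draw ...
// Before drawing figures that should not be mipmap-filtered (same texture):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// ... draw ...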
I'm a novice at OpenGL ES 1.1 (for iOS) texturing and I have a problem creating a motion blur effect. While googling, I found that I should render my scene at different moments in time to several textures and then draw all these textures on the screen with different alpha values. But the problem is that I don't know how to implement all this! So, my questions are:
How do I draw a 2D texture on the screen? Should I make a square and put my texture on it? Or maybe there is a way to draw a texture on the screen directly?
How do I draw several textures (one upon another) on the screen with different alpha values?
I've already come up with some ideas, but I'm not sure if they are correct or not.
Thanks in advance!
Well, of course the first piece of advice is: understand the basics before trying to do advanced stuff. Other than that:
Yes indeed, to draw a full-screen texture you just draw a textured screen-sized quad. An orthographic projection would be a good idea in this case, making the screen-alignment of the quad and its proper sizing easier. For getting the textures in the first place (by rendering into them), FBOs might be of help, but I'm not sure they are supported on ES 1 devices, otherwise the good old glCopyTexSubImage2D will do, too, albeit requiring a copy operation.
Well, you just draw multiple textured quads (see 1) one over the other. You might configure the texture environment to scale the texture's color with the quad's base color (glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)) and give your quads a color of (1, 1, 1, alpha) (of course lighting should be disabled). Additionally you have to enable alpha blending (glEnable(GL_BLEND)) and use an appropriate blending function (glBlendFunc(GL_SRC_ALPHA, GL_ONE) should do).
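As a rough ES 1.1 sketch of point 2 (the texture IDs, per-texture alpha values, and numTextures are placeholders for whatever the motion-blur capture produces):
// Orthographic projection so the quad maps to the full screen
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0.0f, 1.0f, 0.0f, 1.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
const GLfloat verts[]     = { 0,0,  1,0,  0,1,  1,1 };
const GLfloat texCoords[] = { 0,0,  1,0,  0,1,  1,1 };
glDisable(GL_LIGHTING);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
// Draw each captured frame's texture over the previous ones,
// scaling it by the desired alpha via the quad color
for (int i = 0; i < numTextures; ++i) {
    glColor4f(1.0f, 1.0f, 1.0f, alpha[i]);
    glBindTexture(GL_TEXTURE_2D, textures[i]);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}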
But if all these terms don't tell you anything, you should rather first learn the basics using a good learning resource before delving into more advanced effects.
I am working on an iOS App that visualizes data as a line-graph. The graph is drawn as a CGPath in a fullscreen custom UIView and contains at most 320 data-points. The data is frequently updated and the graph needs to be redrawn accordingly – a refresh rate of 10/sec would be nice.
So far so easy. It seems however, that my approach takes a lot of CPU time. Refreshing the graph with 320 segments at 10 times per second results in 45% CPU load for the process on an iPhone 4S.
Maybe I underestimate the graphics-work under the hood, but to me the CPU load seems a lot for that task.
Below is my drawRect: implementation, which gets called each time a new set of data is ready. N holds the number of points, and points is a CGPoint* vector with the coordinates to draw.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // set attributes
    CGContextSetStrokeColorWithColor(context, [UIColor lightGrayColor].CGColor);
    CGContextSetLineWidth(context, 1.f);
    // create path
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddLines(path, NULL, points, N+1);
    // stroke path
    CGContextAddPath(context, path);
    CGContextStrokePath(context);
    // clean up
    CGPathRelease(path);
}
I tried rendering the path to an offline CGContext first before adding it to the current layer as suggested here, but without any positive result. I also fiddled with an approach drawing to the CALayer directly but that too made no difference.
Any suggestions how to improve performance for this task? Or is the rendering simply more work for the CPU than I realize? Would OpenGL make any sense/difference?
Thanks /Andi
Update: I also tried using UIBezierPath instead of CGPath. This post here gives a nice explanation why that didn't help. Tweaking CGContextSetMiterLimit et al. also didn't bring great relief.
Update #2: I eventually switched to OpenGL. It was a steep and frustrating learning curve, but the performance boost is just incredible. However, CoreGraphics' anti-aliasing algorithms do a nicer job than what can be achieved with 4x-multisampling in OpenGL.
"This post here gives a nice explanation why that didn't help."
It also explains why your drawRect: method is slow.
You're creating a CGPath object every time you draw. You don't need to do that; you only need to create a new CGPath object every time you modify the set of points. Move the creation of the CGPath to a new method that you call only when the set of points changes, and keep the CGPath object around between calls to that method. Have drawRect: simply retrieve it.
You already found that rendering is the most expensive thing you're doing, which is good: You can't make rendering faster, can you? Indeed, drawRect: should ideally do nothing but rendering, so your goal should be to drive the time spent rendering as close as possible to 100%—which means moving everything else, as much as possible, out of drawing code.
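A minimal sketch of that split, assuming the view keeps the path in a CGMutablePathRef ivar named _path and exposes a hypothetical setPoints:count: method that is called whenever new data arrives:
// Rebuild the path only when the data actually changes
- (void)setPoints:(const CGPoint *)newPoints count:(size_t)count {
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddLines(path, NULL, newPoints, count);
    CGPathRelease(_path);   // safe if _path is NULL
    _path = path;
    [self setNeedsDisplay];
}

// drawRect: only strokes the cached path
- (void)drawRect:(CGRect)rect {
    if (_path == NULL) return;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(context, [UIColor lightGrayColor].CGColor);
    CGContextSetLineWidth(context, 1.f);
    CGContextAddPath(context, _path);
    CGContextStrokePath(context);
}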
Depending on how you make your path, it may be that drawing 300 separate paths is faster than one path with 300 points. The reason for this is that often the drawing algorithm will be looking to figure out overlapping lines and how to make the intersections look 'perfect' - when perhaps you only want the lines to opaquely overlap each other. Many overlap and intersection algorithms are N**2 or so in complexity, so the speed of drawing scales with the square of the number of points in one path.
It depends on the exact options (some of them default) that you use. You need to try it.
tl;dr: You can set the drawsAsynchronously property of the underlying CALayer, and your CoreGraphics calls will use the GPU for rendering.
There is a way to control the rendering policy in CoreGraphics. By default, all CG calls are done via CPU rendering, which is fine for smaller operations, but is hugely inefficient for larger render jobs.
In that case, simply setting the drawsAsynchronously property of the underlying CALayer switches the Core Graphics rendering engine to a GPU-backed, Metal-based renderer and vastly improves performance. This is true on both macOS and iOS.
I ran a few performance comparisons (involving several different CG calls, including CGContextDrawRadialGradient, CGContextStrokePath, and CoreText rendering using CTFrameDraw), and for larger render targets there was a massive performance increase of over 10x.
As can be expected, as the render target shrinks the GPU advantage fades until at some point (generally for render target smaller than 100x100 or so pixels), the CPU actually achieves a higher framerate than the GPU. YMMV and of course this will depend on CPU/GPU architectures and such.
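For completeness, enabling it is a one-liner on the view's backing layer (shown here in a hypothetical UIView subclass initializer):
- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        // Queue the Core Graphics drawing commands and execute them asynchronously
        self.layer.drawsAsynchronously = YES;
    }
    return self;
}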
Have you tried using UIBezierPath instead? UIBezierPath uses CGPath under-the-hood, but it'd be interesting to see if performance differs for some subtle reason. From Apple's Documentation:
For creating paths in iOS, it is recommended that you use UIBezierPath instead of CGPath functions unless you need some of the capabilities that only Core Graphics provides, such as adding ellipses to paths. For more on creating and rendering paths in UIKit, see "Drawing Shapes Using Bezier Paths."
I would also try setting different properties on the CGContext, in particular different line join styles using CGContextSetLineJoin(), to see if that makes any difference.
Have you profiled your code using the Time Profiler instrument in Instruments? That's probably the best way to find where the performance bottleneck is actually occurring, even when the bottleneck is somewhere inside the system frameworks.
I am no expert on this, but what I would suspect first is that updating 'points' could be what takes the time, rather than the rendering itself. In that case, you could simply stop updating the points, repeatedly render the same path, and see if it takes nearly the same CPU time. If not, you can improve performance by focusing on the updating algorithm.
If it IS truly the rendering that is the problem, I think OpenGL should certainly improve performance, because in theory it will render all 320 lines at the same time.
I have made an app similar to this one: http://www.youtube.com/watch?v=U2uH-jrsSxs (the sound is a bit loud and bad). The problem is that there is a very thin line/dots/whatever appearing at the bottom of every texture. It is almost unnoticeable, but it is there, and I have no idea why. My texture size is 256x256. I tested earlier with a texture size of 128x128 and I THINK there was nothing there, but I'm not sure. It's not such a big deal as it is very thin, but I find it annoying. Here is a screenshot; I have marked those lines in RED. I'm a noob at OpenGL (ES), so I probably did something wrong. Any help is appreciated.
This will be due to OpenGL tiling the texture to fill the specified area. So the thin line you are seeing will be the very top of that texture just starting to repeat again.
To avoid it, tell the texture to clamp rather than repeat (repeat being synonymous with tiling). Textures repeat by default, so you will want lines something like this (GL_CLAMP_TO_EDGE is the clamp mode available in OpenGL ES):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
If you're this way inclined, there is also a no-code-involved bodge way around it. Simply edit your source graphics so that no pixels are present in the top or left edges. So move the whole lot down one pixel and right one pixel inside its canvas. But then of course you will need to adjust your coordinates if you want the images to appear in exactly the same place.