OpenGL ES 2.0 GPU accelerated geometry sorting

I have a 3D app that currently uses OpenGL ES 1.1. Most meshes are hardwired in the app and are static (they don't move), so the depth test lets me draw the transparent geometry efficiently, relying on the hardwired sorting.
Now I want to load the world from a 3D editor and add some transparent dynamic objects (the geometry can arrive in any arbitrary order). With the OpenGL ES 1.1 depth test, geometry at the back that is rendered after geometry at the front punches "holes" in it.
I will migrate to OpenGL ES 2.0 soon, so I wonder if there is GPU-accelerated sorting that draws the geometry at the back first, so that the blending is done correctly.

OpenGL ES 2.0 doesn't solve any geometry-ordering problems for you. You still need to sort your objects before issuing OpenGL ES 2.0 draw calls.
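The sort itself still happens on the CPU, before the draw calls. As a minimal sketch, assuming a hypothetical TransparentObject record standing in for whatever your engine uses, back-to-front ordering by squared distance from the eye looks like this:

#include <stdlib.h>

/* Hypothetical per-object record; substitute your engine's own types. */
typedef struct {
    float position[3];    /* object centre in world space */
    unsigned int vbo;     /* whatever handle your draw call needs */
    float viewDepth;      /* recomputed every frame */
} TransparentObject;

/* Larger distance from the eye sorts first, so farther objects draw first. */
static int compareBackToFront(const void *a, const void *b) {
    float da = ((const TransparentObject *)a)->viewDepth;
    float db = ((const TransparentObject *)b)->viewDepth;
    return (da < db) - (da > db);
}

void drawTransparent(TransparentObject *objs, size_t count, const float eye[3]) {
    for (size_t i = 0; i < count; ++i) {
        float dx = objs[i].position[0] - eye[0];
        float dy = objs[i].position[1] - eye[1];
        float dz = objs[i].position[2] - eye[2];
        /* Squared distance is enough for ordering; no sqrt needed. */
        objs[i].viewDepth = dx * dx + dy * dy + dz * dz;
    }
    qsort(objs, count, sizeof(TransparentObject), compareBackToFront);
    /* ...then issue one draw call per object, in this order,
       with depth writes disabled for the transparent pass. */
}

Sorting by object centre is an approximation; long or interpenetrating objects may still need to be split or handled with a different technique.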

Related

Obj-C method to assign colours to pixels directly?

Currently, I am using SpriteKit to do all of the graphics in my programs. Recently, I've been interested in drawing things like the Mandelbrot set, the bifurcation curve, etc.
So to draw these on my screen, I use 1 node per pixel… obviously this means that my program has very low performance with over 100000 nodes on the screen.
I want to find a way of colouring in pixels directly with some command without drawing any nodes. (But I want to stick to Obj-C, Xcode)
Is there some way to do this by accessing Core Graphics, or something similar?
Generally you would use OpenGL ES or Metal to do this.
Here is a tutorial that describes using OpenGL ES shaders with SpriteKit to draw the Mandelbrot set:
https://www.weheartswift.com/fractals-xcode-6/
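Since the question also asks about Core Graphics: you can fill a raw RGBA buffer yourself and wrap it in a CGImage, with no nodes at all. A minimal sketch, where the gradient stands in for your fractal colouring:

#include <CoreGraphics/CoreGraphics.h>
#include <stdint.h>
#include <stdlib.h>

/* Build a CGImage by writing RGBA bytes directly into a buffer. */
CGImageRef makePixelImage(size_t width, size_t height) {
    size_t bytesPerRow = width * 4;
    uint8_t *pixels = calloc(height, bytesPerRow);
    for (size_t y = 0; y < height; ++y) {
        for (size_t x = 0; x < width; ++x) {
            uint8_t *p = pixels + y * bytesPerRow + x * 4;
            p[0] = (uint8_t)(255 * x / width);   /* R: replace this gradient   */
            p[1] = (uint8_t)(255 * y / height);  /* G: with a per-pixel colour */
            p[2] = 128;                          /* B: from your escape counts */
            p[3] = 255;                          /* A: fully opaque            */
        }
    }
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                             space, kCGImageAlphaPremultipliedLast);
    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    free(pixels);
    return image; /* caller releases with CGImageRelease */
}

The result can be wrapped in a UIImage, or handed to SKTexture's textureWithCGImage: so the whole plot becomes a single SpriteKit node instead of one node per pixel. The per-pixel loop still runs on the CPU, which is why shaders remain the faster route for fractals.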

OpenGL ES 2.0 drawing imprecision

I'm having a weird issue in OpenGL. I'm designing a 2D engine, and so far I've coded the routines that let you draw sprites, rectangles, and boxes, and translate and scale them. However, when I run a small demo of my engine, I notice that when gradually scaling rectangles in an animation (drawn using 4 vertices and GL_LINE_LOOP), the rectangle edges seem to bounce between the two neighbouring pixels.
I can't determine the source of the problem, or even formulate a proper search query for Google. Can someone shed some light on this matter? If my question is not clear, please let me know.
Building a 2D library on OpenGL ES is going to be problematic for several reasons. First of all, the Khronos specifications state that it is not intended to produce "pixel perfect" rendering. Every OpenGL ES renderer is allowed some variation in rendered results. This is because the actual rendering is implemented in hardware and floating point rounding can be a little different from platform to platform. Even the shader compilers are completely different from one GPU to the next.
Another issue is that most of the GPUs on mobile devices today are tile-based deferred renderers, and they do not typically support partial screen rendering. In other words, every screen update requires replacing the entire frame.
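That said, the bouncing edges specifically can often be tamed by snapping the rectangle's corner positions to pixel centres before submitting them. A minimal sketch (this is a common workaround, not something the spec guarantees):

#include <math.h>

/* Snap a coordinate (already in pixel units) to the centre of its pixel.
   Keeping GL_LINE_LOOP vertices on half-pixel centres stops a scaled
   rectangle's edges from flickering between two neighbouring pixels. */
static float snapToPixelCenter(float v) {
    return floorf(v) + 0.5f;
}

This assumes your projection maps one unit to one pixel; with any other mapping, convert to pixel units first, snap, and convert back.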

Cropping Using OpenGL ES 2.0 iOS (vs. using Core Image)

I'm having difficulties finding any documentation about cropping images using OpenGL ES on the iPhone or iPad.
Specifically, I am capturing video frames at a mildly rapid pace (20 FPS), and need something quick that will crop an image. Is it feasible to use OpenGL here? If so, will it perform faster than cropping using Core Image and its associated methods?
It seems that using Core Image methods, I can't achieve faster than about 10-12 FPS output, and I'm looking for a way to hit 20. Any suggestions or pointers to usage of OpenGL for this?
Using OpenGL ES will generally be faster than the Core Image framework. Cropping is done by setting the texture coordinates. In general, the texture coordinates for drawing the whole image look like this:
{
    0.0f, 1.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f
}
The whole image will be drawn with the texture coordinates above. If you just want the upper-right part of the image, you can set the texture coordinates like this:
{
    0.5f, 1.0f,
    1.0f, 1.0f,
    0.5f, 0.5f,
    1.0f, 0.5f
}
This will sample the upper-right quarter of the whole image. Never forget that the texture coordinate origin in OpenGL ES is at the lower-left corner.
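Putting it together, here is a sketch of a draw call that uses those crop coordinates with a triangle strip. The aPosition and aTexCoord attribute locations are hypothetical, obtained from your own shader program, and the source texture is assumed to be bound already:

#include <OpenGLES/ES2/gl.h>

/* Draw a full-viewport quad that samples only the upper-right quarter
   of the bound texture. Corner order matches the lists above:
   top-left, top-right, bottom-left, bottom-right. */
void drawCroppedQuad(GLuint aPosition, GLuint aTexCoord) {
    static const GLfloat positions[] = {
        -1.0f,  1.0f,
         1.0f,  1.0f,
        -1.0f, -1.0f,
         1.0f, -1.0f,
    };
    static const GLfloat cropTexCoords[] = {
        0.5f, 1.0f,
        1.0f, 1.0f,
        0.5f, 0.5f,
        1.0f, 0.5f,
    };
    glVertexAttribPointer(aPosition, 2, GL_FLOAT, GL_FALSE, 0, positions);
    glEnableVertexAttribArray(aPosition);
    glVertexAttribPointer(aTexCoord, 2, GL_FLOAT, GL_FALSE, 0, cropTexCoords);
    glEnableVertexAttribArray(aTexCoord);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}

Because the crop is just different texture coordinates on the same quad, changing the cropped region from frame to frame costs essentially nothing, which is what makes this approach fast enough for 20 FPS video.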

OpenGL ES Sphere alpha texture exported from Blender

I am using OpenGL ES 1.1 on iOS 5.0, and I want to draw a sphere with a texture mapped onto it.
The texture will be a map of the world: a .png with an alpha channel.
I want to be able to see the far side of the globe, from the inside, through the transparent areas.
However, I obtain this strange effect and I don't know why this is happening.
I'm exporting from Blender using this script: https://github.com/jlamarche/iOS-OpenGLES-Stuff/tree/master/Blender%20Export/objc_blend_2.62
I've already tried to reverse the orientation of the normals but it didn't help.
I don't want to activate culling because I want to see both faces.
http://imageshack.us/photo/my-images/819/screenshot20121207at308.png/

Why are OpenGL ES and cocos2D faster than Cocoa Touch / iOS frameworks itself?

I wonder: if cocos2D is built on top of iOS's frameworks, won't cocos2D be slightly slower than using the Cocoa frameworks directly? (Is cocos2D on top of OpenGL ES, which is in turn on top of Cocoa Touch / iOS frameworks, including Core Animation and Quartz?)
However, I have heard that OpenGL ES is actually faster than Core Graphics, Core Animation, and Quartz.
So is OpenGL ES the fastest, cocos2D second, and Core Animation the slowest? Does anyone know why using OpenGL ES is faster than using the Cocoa frameworks directly?
cocos2D is built on top of OpenGL. When you create a sprite in cocos2D, you are actually creating a 3D model and applying a texture to it. The 3D model is just a flat square, and the camera is always looking straight at it, which is why it all appears flat and 2D. But this is also why you can scale and rotate sprites so easily: all you are really doing is rotating the 2D square (well, two triangles, really) or moving it closer to or further from the camera. cocos2D handles all of that for you.
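To make the "two triangles" point concrete, a sprite's geometry boils down to roughly this (the names and unit size are illustrative, not cocos2D's actual internals):

#include <OpenGLES/ES2/gl.h>

/* A unit sprite quad: two triangles sharing a diagonal, each vertex
   carrying a position and a texture coordinate. Scaling, rotating, or
   moving the sprite is just a transform applied to these six vertices. */
typedef struct { GLfloat x, y, s, t; } SpriteVertex;

static const SpriteVertex spriteQuad[6] = {
    /*   x     y      s     t   */
    { 0.0f, 0.0f,  0.0f, 0.0f },  /* first triangle  */
    { 1.0f, 0.0f,  1.0f, 0.0f },
    { 1.0f, 1.0f,  1.0f, 1.0f },
    { 0.0f, 0.0f,  0.0f, 0.0f },  /* second triangle */
    { 1.0f, 1.0f,  1.0f, 1.0f },
    { 0.0f, 1.0f,  0.0f, 1.0f },
};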
OpenGL is designed from the start to pump out 3D graphics very quickly, so it is built around shoving points and triangles about. It is then accelerated by 3D rendering hardware dedicated to exactly that job. Because that is all it does, it can be heavily optimised for the maths on the points that make up the objects and for mapping textures onto those objects. It doesn't have to worry about handling touches or the other system things that Cocoa does.
Cocoa Touch doesn't expose OpenGL to you directly. It may use some hardware acceleration internally, but it isn't designed for raw drawing speed; it's designed for creating 2D buttons and the like. What it does, it does well, but it has many layers to pass through, which makes it less efficient than something designed purely for graphics (OpenGL).
OpenGL is the fastest.
cocos2D is slightly slower, but only because of the wrapper layers that make your life easier. If you did the same work in raw OpenGL yourself, you might make it faster, but at the cost of the convenience those wrappers provide.
Core Animation is the slowest.
But they all have their uses, and each is excellent in its own niche.