OpenGL ES Sphere alpha texture exported from Blender - objective-c

I am using OpenGL ES 1.1 on iOS 5.0, and I want to draw a sphere with a texture mapped onto it.
The texture will be a map of the world, which is a .png with an alpha channel.
I want to be able to see the far side of the globe through the inside, where the texture is transparent.
However, I obtain this strange effect and I don't know why this is happening.
I'm exporting from Blender using this script: https://github.com/jlamarche/iOS-OpenGLES-Stuff/tree/master/Blender%20Export/objc_blend_2.62
I've already tried to reverse the orientation of the normals but it didn't help.
I don't want to activate culling because I want to see both faces.
http://imageshack.us/photo/my-images/819/screenshot20121207at308.png/

Related

Obj-C method to assign colours to pixels directly?

Currently, I am using SpriteKit to do all of the graphics in my programs. Recently, I've been interested in drawing things like the Mandelbrot set, the bifurcation diagram, etc.
To draw these on screen, I use one node per pixel, but obviously this means my program performs very poorly with over 100,000 nodes on screen.
I want to find a way of colouring in pixels directly with some command without drawing any nodes. (But I want to stick to Obj-C, Xcode)
Is there some way by accessing Core graphics, or something?
Generally you would use OpenGL ES or Metal to do this.
Here is a tutorial that describes using OpenGL ES shaders with SpriteKit to draw the Mandelbrot set:
https://www.weheartswift.com/fractals-xcode-6/
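If the image only needs to be computed once rather than animated every frame, a CPU-side alternative that stays within the stock Apple frameworks is to fill a bitmap with Core Graphics and show it as a single texture. A minimal sketch, assuming you hand the result to SpriteKit yourself; the function name and the gradient fill are placeholders for your own per-pixel colouring:

#include <CoreGraphics/CoreGraphics.h>
#include <stdint.h>

/* Build a CGImage by writing RGBA bytes directly into a bitmap context.
   The gradient below is only a stand-in for a per-pixel computation such
   as a Mandelbrot escape-time colouring. */
CGImageRef CreatePixelImage(size_t width, size_t height) {
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             space, kCGImageAlphaPremultipliedLast);
    uint8_t *pixels = CGBitmapContextGetData(ctx);

    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            size_t i = (y * width + x) * 4;
            pixels[i + 0] = (uint8_t)(255 * x / width);   /* red   */
            pixels[i + 1] = (uint8_t)(255 * y / height);  /* green */
            pixels[i + 2] = 128;                          /* blue  */
            pixels[i + 3] = 255;                          /* alpha (opaque) */
        }
    }

    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    return image;  /* caller releases with CGImageRelease when done */
}

The resulting CGImageRef can be displayed with [SKTexture textureWithCGImage:] on a single SKSpriteNode, so the node count stays at one regardless of resolution. For anything that has to update every frame, the shader approach above will be much faster.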

OpenGL ES 2.0 GPU accelerated geometry sorting

I have a 3D app that currently uses OpenGL ES 1.1. Most meshes are hardwired in the app and static (they don't move), so the depth test lets me draw the transparent geometry efficiently, relying on the hardwired draw order.
Now I want to load the world from a 3D editor and add some transparent dynamic objects (so the geometry can come in any arbitrary order). With the OpenGL ES 1.1 depth test, this punches "holes" in the geometry at the back when it happens to be rendered after the geometry in front of it.
I will be migrating to OpenGL ES 2.0 soon, so I wonder whether there is GPU-accelerated sorting that draws the geometry at the back first, so that blending is done correctly.
OpenGL ES 2.0 doesn't solve any geometry-ordering problems for you. You still need to sort your objects before issuing OpenGL ES 2.0 draw calls.
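The usual approach is a per-frame CPU sort of the transparent draw list by view-space depth, back to front, before the transparent pass. A minimal sketch; the object struct, the meshID handle, and the drawMesh helper are assumptions for illustration, not an existing API:

#include <stdlib.h>

/* Hypothetical per-object record: whatever handle you need to issue the
   object's draw call, plus its centre transformed into view (camera) space. */
typedef struct {
    float viewSpaceZ;     /* in GL view space, farther objects have smaller (more negative) z */
    unsigned int meshID;  /* handle used by your own draw routine */
} TransparentObject;

/* Back-to-front comparator: the most distant object sorts first, so blending
   composites each surface over what is already in the framebuffer. */
static int CompareBackToFront(const void *a, const void *b) {
    float za = ((const TransparentObject *)a)->viewSpaceZ;
    float zb = ((const TransparentObject *)b)->viewSpaceZ;
    return (za > zb) - (za < zb);
}

void DrawTransparentObjects(TransparentObject *objects, size_t count) {
    qsort(objects, count, sizeof(TransparentObject), CompareBackToFront);
    /* Typical state for the transparent pass: depth test on, depth writes off
       (glDepthMask(GL_FALSE)), blending on, then one draw call per object
       in sorted order. */
    for (size_t i = 0; i < count; i++) {
        /* drawMesh(objects[i].meshID);  -- hypothetical draw helper */
    }
}

Sorting by a single per-object depth is only an approximation: intersecting or mutually overlapping transparent meshes still need to be split or handled with another technique.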

Cropping Using OpenGL ES 2.0 iOS (vs. using Core Image)

I'm having difficulties finding any documentation about cropping images using OpenGL ES on the iPhone or iPad.
Specifically, I am capturing video frames at a mildly rapid pace (20 FPS), and need something quick that will crop an image. Is it feasible to use OpenGL here? If so, will it perform faster than cropping using Core Image and its associated methods?
It seems that using Core Image methods, I can't achieve faster than about 10-12 FPS output, and I'm looking for a way to hit 20. Any suggestions or pointers to usage of OpenGL for this?
Using OpenGL ES will generally be faster than the Core Image framework. Cropping is done by setting the texture coordinates. For the full image, the texture coordinates normally look like this:
{
0.0f,1.0f,
1.0f,1.0f,
0.0f,0.0f,
1.0f,0.0f
}
The whole image will be drawn with the texture coordinates above. If you just want the upper-right part of the image, you can set the texture coordinates like this:
{
0.5f,1.0f,
1.0f,1.0f,
0.5f,0.5f,
1.0f,0.5f
}
This will give you the upper-right quarter of the whole image. Don't forget that the texture-coordinate origin in OpenGL ES is at the lower-left corner.
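For context, here is roughly how those coordinates get wired up in OpenGL ES 2.0: the crop happens entirely in the texture fetch, so no extra image copy is needed. This is only a sketch; the attribute locations and the surrounding shader/program setup are assumed to exist already:

#include <OpenGLES/ES2/gl.h>

/* Full-screen quad positions (clip space) paired with texture coordinates
   that sample only the upper-right quarter of the source texture.
   Vertex order matches a GL_TRIANGLE_STRIP. */
static const GLfloat quadPositions[] = {
    -1.0f,  1.0f,   /* top left     */
     1.0f,  1.0f,   /* top right    */
    -1.0f, -1.0f,   /* bottom left  */
     1.0f, -1.0f,   /* bottom right */
};
static const GLfloat cropTexCoords[] = {
    0.5f, 1.0f,
    1.0f, 1.0f,
    0.5f, 0.5f,
    1.0f, 0.5f,
};

/* positionAttrib and texCoordAttrib are the attribute locations of your own
   vertex shader (assumed to be set up elsewhere). */
void DrawCroppedQuad(GLuint positionAttrib, GLuint texCoordAttrib) {
    glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, quadPositions);
    glEnableVertexAttribArray(positionAttrib);
    glVertexAttribPointer(texCoordAttrib, 2, GL_FLOAT, GL_FALSE, 0, cropTexCoords);
    glEnableVertexAttribArray(texCoordAttrib);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}

With the cropped coordinates bound this way, the fragment shader samples the texture as usual, so the cost is no higher than drawing the full image.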

Bump Mapping in OpenGL ES 2.0/GLKit?

I'm working on a project that displays a number of rotating cubes and would like to learn more about bump and displacement maps. I have a brief understanding of how they work and have already chosen my textures and bump maps by using Crazy Bump beta.
The only tutorial I have found so far is in the book Pro OpenGL ES for iOS by Mike Smithwick, but it only covers OpenGL ES 1, and I've just got the hang of GLKit: I can add textures to my cubes, blend colours, and drive a finger-guided spotlight.
Is there a tutorial or guide anybody can point me to that may have more information on how to blend textures and bump maps in OpenGL ES 2?

Using WebGL or OpenGL ES 2, how do I render the contents of an RBO onscreen?

Using WebGL (which is constrained to the OpenGL ES 2 API), I am successfully rendering to texture and then displaying that texture onscreen. Because it is a texture, it is not being antialiased. If I were rendering to an RBO and then displaying that onscreen, I would be able to take advantage of AA.
My render target setup looks like this:
1. Create FBO
2. Bind FBO
3. Create texture (to be rendered to)
4. Create and bind depth buffer as RBO
5. Attach texture and RBO to FBO
And my rendering update loop looks like this:
Render the scene to the FBO created in step #1 above
Render a screen aligned quad with the texture created in step #3 above
With desktop OpenGL, I would call glBlitFramebuffer() instead of drawing the screen aligned quad.
How do I render my scene with antialiasing? Do I need to replace the texture with an RBO? If so, what calls do I use to bind the RBO to draw a screen-aligned quad?
You cannot blit the contents of an RBO to the screen in WebGL unless you perform a readback and re-upload the data to a texture, which is rather slow.
WebGL has no support for MSAA on FBOs in any form (neither as RBO nor as RTT).
You can implement your own antialiasing in a variety of ways.
Render at twice the size in each dimension and scale down (Google Maps with WebGL does this); a rough sketch of this option follows the list.
Render at 1:1 size, run a Sobel or Laplace edge detection on colour and depth, and run a bilateral Gaussian blur using edge strength as the weight (I've used this technique in some of my demos and it works well: http://codeflow.org/entries/2011/apr/11/advanced-webgl-part-1/ )
Use the morphological antialiasing recipe from GPU Pro 2 (I've yet to try that)
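As an illustration of the first option (render larger, then scale down), here is a rough OpenGL ES 2.0-style sketch; the same calls exist in WebGL with the gl. prefix. The 2x factor, names, and formats are assumptions, not part of the original answer:

#include <OpenGLES/ES2/gl.h>

/* Supersampling: render the scene into a texture twice the display size in
   each dimension, then draw that texture onto the screen-aligned quad with
   GL_LINEAR filtering so the downscale averages neighbouring samples. */
GLuint CreateSupersampledTarget(GLsizei screenW, GLsizei screenH, GLuint *outFBO) {
    GLsizei w = screenW * 2, h = screenH * 2;

    GLuint color;
    glGenTextures(1, &color);
    glBindTexture(GL_TEXTURE_2D, color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    GLuint depth;
    glGenRenderbuffers(1, &depth);
    glBindRenderbuffer(GL_RENDERBUFFER, depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);

    glGenFramebuffers(1, outFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, *outFBO);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);

    /* Per frame: bind *outFBO, glViewport(0, 0, w, h), render the scene,
       then bind the default framebuffer, glViewport(0, 0, screenW, screenH),
       and draw the screen-aligned quad sampling 'color'. */
    return color;
}

Note that this costs roughly 4x the fill rate of a 1:1 render, which is why the post-process options listed above can be more attractive on slower GPUs.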