I read on a website that rectangle textures can be used for screen-sized images. However, the function glGetTexParameterfv accepts only GL_TEXTURE_2D and GL_TEXTURE_CUBE_MAP. Does this mean that rectangle textures, i.e., GL_TEXTURE_RECTANGLE, are not supported by OpenGL ES 2.0? And does that mean only normalized texture coordinates can be used in shaders? Thank you.
About GL_TEXTURE_RECTANGLE
That target is not part of OpenGL ES 2.0; the only texture targets there are GL_TEXTURE_2D and GL_TEXTURE_CUBE_MAP. Texture coordinates are simply values interpolated between vertices. For a 2D texture, they lie in the 0.0 to 1.0 range (if you are not tiling the texture); for a cube map texture, the coordinate is a direction vector (which can be non-normalized).
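If you only need pixel-based addressing of a screen-sized image, you can emulate rectangle-texture behavior on a regular GL_TEXTURE_2D by normalizing the coordinates yourself. A minimal sketch, assuming a hypothetical uTexSize uniform that holds the texture dimensions in pixels:

/* Fragment shader source (as a C string) that emulates
   GL_TEXTURE_RECTANGLE-style pixel addressing; uTexSize is made up. */
static const char *fragSrc =
    "precision mediump float;\n"
    "uniform sampler2D uTex;\n"
    "uniform vec2 uTexSize;    /* texture size in pixels */\n"
    "void main() {\n"
    "    /* divide window-space pixel coordinates by the texture size */\n"
    "    vec2 uv = gl_FragCoord.xy / uTexSize;\n"
    "    gl_FragColor = texture2D(uTex, uv);\n"
    "}\n";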
I have a Poliigon Texture Demo c4d file. The file includes a sphere with a texture that renders correctly (bottom sphere in image). However, when I create a sphere (top sphere in image), convert it to a polygonal object, and apply the same texture, the texture is stretched horizontally.
I can fix this by changing the "Length U" setting to 50% in the Texture Tag, but I notice that the sphere below does not need this modification, so I was wondering how to convert the top sphere to a polygonal object the same way the bottom sphere was converted.
Cinema 4D example
I have included a screengrab. The only notable difference is that the sphere below has additional diagonal division.
I am quite new to 3D so hope this all makes sense.
I think you only need to change the Sphere's Type to a triangular type, like the sphere at the bottom.
If this helps, please consider up-voting and marking your question as solved.
I have a mesh (close to a polygon soup) with texture coordinates. I'd like to use CGAL for various operations on this mesh, most specifically the Nef_polyhedron class. I can "thicken" each triangle to make sure it's manifold and acceptable as a Nef polyhedron, but I don't know how to carry the texture coordinates through the operations so that they are preserved for vertices and interpolated for cut edges.
Also, a single "point" may have multiple texture coordinates, as the texture coordinate is a function of both "face" and "point."
Are there examples or documentation for how to do this? Or does CGAL not support this in a mostly-built-in fashion?
From OpenGL ES 2.0 specification section 4.4.5:
"Formats not listed in table 4.5, including compressed internal formats. are not
color-, depth-, or stencil-renderable, no matter which components they contain."
Then there are extensions that extend this table such as:
OES_depth24
OES_depth32
OES_rgb8_rgba8
ARM_rgba8
If I understood the specification correctly, table 4.5 applies to both texture and renderbuffer formats. In that case, for example, RGB and RGBA textures with 8 bits per component are not color-renderable unless the extension OES_rgb8_rgba8 (or ARM_rgba8 for RGBA) is supported.
On a test device that supports OES_rgb8_rgba8, the following texture formats were valid, i.e., framebuffer complete when attached to an FBO as the color attachment:
RGB 565
RGB 888
RGBA 4444
RGBA 5551
RGBA 8888
And these were not:
Alpha 8
Luminance 8
LuminanceAlpha 88
The results match my assumptions (at least on one device), but I would like to know whether I understood the specification correctly or whether this only works by accident.
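For reference, my per-format test boils down to something like the following (a minimal sketch; error handling omitted, and isColorRenderable is just a name I made up):

#include <GLES2/gl2.h>

/* Returns nonzero if a texture of the given format/type is
   color-renderable, i.e., framebuffer complete as a color attachment. */
static int isColorRenderable(GLenum format, GLenum type)
{
    GLuint tex, fbo;
    GLenum status;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* In ES 2.0 the internal format must match the pixel format. */
    glTexImage2D(GL_TEXTURE_2D, 0, format, 64, 64, 0, format, type, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &tex);
    return status == GL_FRAMEBUFFER_COMPLETE;
}

/* e.g. isColorRenderable(GL_RGBA, GL_UNSIGNED_BYTE)        -> RGBA 8888
        isColorRenderable(GL_RGB, GL_UNSIGNED_SHORT_5_6_5)  -> RGB 565
        isColorRenderable(GL_LUMINANCE, GL_UNSIGNED_BYTE)   -> Luminance 8 */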
Yes, your assumptions are correct.
The list of renderable formats in the official specification is just what an OpenGL ES 2.0 implementation is required to support; most implementations support many more than are listed there.
However, no implementation of OpenGL / OpenGL ES supports alpha or luminance textures as color-renderable, though you can replicate their behavior with texture swizzle extensions and/or GLSL vector swizzling. The extension EXT_texture_rg adds red and red/green image formats that are usable by textures and renderbuffers. These two formats are very useful when you want to draw into a one- or two-channel image format using an FBO, since GL_LUMINANCE, GL_ALPHA, and GL_LUMINANCE_ALPHA are not color-renderable formats.
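For example, with EXT_texture_rg a one-channel, color-renderable texture can be allocated roughly like this (a sketch under that assumption; the extension check and the makeRedTexture name are mine):

#include <string.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* Allocates a single-channel texture usable as an FBO color attachment,
   assuming the implementation exposes GL_EXT_texture_rg. */
static GLuint makeRedTexture(GLsizei w, GLsizei h)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (!ext || !strstr(ext, "GL_EXT_texture_rg"))
        return 0;   /* extension not supported */

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RED_EXT, w, h, 0,
                 GL_RED_EXT, GL_UNSIGNED_BYTE, NULL);
    /* In the shader, replicate the channel yourself, e.g.
       texture2D(s, uv).rrrr, to mimic a luminance texture. */
    return tex;
}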
Generally speaking, the set of renderbuffer image formats is a subset of the texture image formats. You can use compressed image formats (optional in ES 2.0), luminance, and alpha image formats for texturing and pixel transfer operations, but they are not supported by renderbuffers. This means that while you can draw using these textures, you cannot draw into them by attaching them to an FBO. To draw into a texture, it must have an image format that is both a renderbuffer format and a texture format.
On a historical note, there are renderbuffer formats that are/were not usable as textures. Multisample formats are one example; multisample texturing was added after the initial FBO specification was created.
I noticed that regardless of the shape (aspect ratio) of a texture, it always draws as a perfect square, scaling unequally, when used as a point sprite. I assume this is because points are, after all, circular.
If you wish to use point sprites on rectangular textures, is this possible using the point sprite mechanism, or would I need to just build quads with textures instead?
Or perhaps there is something that can be added to a shader to recognize and work with a rectangular texture? Currently mine are quite simple:
Vertex shader:
TextureCoordOut = TextureCoordinate;
gl_PointSize = 15.0;
Fragment:
gl_FragColor = texture2D(Sampler, isSprite? gl_PointCoord: TextureCoordOut) * DestinationColor;
Points have only one size (gl_PointSize), which is applied equally to the width and height, so point sprites always come out square.
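If you stay with point sprites, you can fake a rectangular sprite by rescaling gl_PointCoord in the fragment shader and discarding the cropped bands. A sketch, assuming a texture that is wider than it is tall and a hypothetical uAspect uniform (texture width divided by height); the isSprite branch from your shader is dropped for brevity:

/* Fragment shader source (as a C string) that letterboxes a wide
   texture inside the square point so it is not squashed. */
static const char *spriteFragSrc =
    "precision mediump float;\n"
    "uniform sampler2D Sampler;\n"
    "uniform float uAspect;   /* texture width / height, > 1.0 */\n"
    "void main() {\n"
    "    /* stretch the t axis so the texture keeps its aspect ratio */\n"
    "    vec2 tc = (gl_PointCoord - 0.5) * vec2(1.0, uAspect) + 0.5;\n"
    "    if (tc.t < 0.0 || tc.t > 1.0)\n"
    "        discard;          /* crop above and below the image */\n"
    "    gl_FragColor = texture2D(Sampler, tc);\n"
    "}\n";

Otherwise, textured quads remain the straightforward way to get truly rectangular sprites.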
I am using OpenGL ES 1.1 on iOS 5.0, and I want to draw a sphere with a texture mapped onto it.
The texture will be a map of the world, which is a .png with an alpha channel.
I want to be able to see the other part of the globe from the inside.
However, I obtain this strange effect and I don't know why this is happening.
I'm exporting from Blender using this script: https://github.com/jlamarche/iOS-OpenGLES-Stuff/tree/master/Blender%20Export/objc_blend_2.62
I've already tried reversing the orientation of the normals, but it didn't help.
I don't want to enable culling because I want to see both faces.
http://imageshack.us/photo/my-images/819/screenshot20121207at308.png/