OpenGL ES 2.0 - Copy Texture Data - objective-c

I have an array of 2D textures initialized in OpenGL ES 2.0:
GLuint textures[10];
After I delete one of the textures at a given array index:
glDeleteTextures(1, &textures[5]);
How do I remove the empty gap left in my array, with relative ease, in order to keep things neat and tidy? Is there a more direct method than rendering each texture and reading it back with glGetTexImage to reorder the textures?

The textures array is actually an array of 'names': non-zero integers assigned by glGenTextures that identify texture objects living on the GPU. As long as you keep track of which names are valid and what you're using them for, you can reorder the array any way you want.
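For instance, a minimal sketch in C (assuming a fixed-size name array): because the names are plain GLuint handles, closing the gap is ordinary array manipulation, and no texel data moves on the GPU.

#include <string.h>   /* memmove */

GLuint textures[10];
glGenTextures(10, textures);
/* ... use the textures ... */
glDeleteTextures(1, &textures[5]);

/* shift names 6..9 down one slot to close the gap */
memmove(&textures[5], &textures[6], 4 * sizeof(GLuint));
textures[9] = 0;   /* 0 is never a valid texture name */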

Related

Why isn't there a 3D array image in Vulkan?

In the Vulkan API it was seen as valuable to include a VK_IMAGE_VIEW_TYPE_CUBE_ARRAY, but not a 3D array:
typedef enum VkImageViewType {
    VK_IMAGE_VIEW_TYPE_1D = 0,
    VK_IMAGE_VIEW_TYPE_2D = 1,
    VK_IMAGE_VIEW_TYPE_3D = 2,
    VK_IMAGE_VIEW_TYPE_CUBE = 3,
    VK_IMAGE_VIEW_TYPE_1D_ARRAY = 4,
    VK_IMAGE_VIEW_TYPE_2D_ARRAY = 5,
    VK_IMAGE_VIEW_TYPE_CUBE_ARRAY = 6,
} VkImageViewType;
Every 6 layers of a cube array view form another cube. I'm actually struggling to think of a use case for a cube array, and I don't really think a 3D array would be useful either, but why does the cube get an array type and not the 3D image? How is this cube array even supposed to be used? Is there even a cube array sampler?
Cube maps, cube map arrays, and 2D array textures are, in terms of the bits and bytes of storage, ultimately the same thing. All of these views are created from the same kind of image. You have to specify if you need a layered 2D image to be usable as an array or a cubemap (or both), but conceptually, they're all just the same thing.
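To make that concrete, here is a minimal C sketch (assuming a valid VkDevice named device; the size, format, and usage are illustrative, and error handling is omitted). The backing image is an ordinary layered 2D image; only a creation flag plus the view type decide how its layers are interpreted:

VkImageCreateInfo imageInfo = {
    .sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
    .flags         = VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT, /* allow cube views */
    .imageType     = VK_IMAGE_TYPE_2D,                    /* not 3D */
    .format        = VK_FORMAT_R8G8B8A8_UNORM,
    .extent        = { 512, 512, 1 },
    .mipLevels     = 1,
    .arrayLayers   = 12,              /* 12 layers = a 2-element cube array */
    .samples       = VK_SAMPLE_COUNT_1_BIT,
    .tiling        = VK_IMAGE_TILING_OPTIMAL,
    .usage         = VK_IMAGE_USAGE_SAMPLED_BIT,
    .sharingMode   = VK_SHARING_MODE_EXCLUSIVE,
    .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
};
VkImage image;
vkCreateImage(device, &imageInfo, NULL, &image);

/* The same image could be viewed as a 12-layer 2D array, two separate
   cubes, or one cube array; only viewType changes: */
VkImageViewCreateInfo viewInfo = {
    .sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
    .image    = image,
    .viewType = VK_IMAGE_VIEW_TYPE_CUBE_ARRAY, /* needs the imageCubeArray feature */
    .format   = imageInfo.format,
    .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 12 },
};
VkImageView view;
vkCreateImageView(device, &viewInfo, NULL, &view);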
Each mipmap level consists of L images of a size WxH, where W and H shrink based on the original size and the current mipmap level. L is the number of layers specified at image creation time, and it does not change with the mipmap level. Put simply, there are a constant number of 2D images per mipmap level. Cubemaps and cubemap arrays require L to be either 6 or a multiple of 6 respectively, but it's still constant.
A 3D image is not that. Each mipmap level consists of a single image of size WxHxD, where W, H, and D shrink based on the original size and current mipmap level. Even if you think of a mipmap level of a 3D image as being D number of WxH images, the number D is not constant between mipmap levels.
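To make that concrete: a 64x64 2D array image with L = 4 has mip levels of 4x(64x64), 4x(32x32), 4x(16x16), and so on down to 4x(1x1); the layer count never changes. A 64x64x4 3D image instead has a single image per level: 64x64x4, then 32x32x2, then 16x16x1, then 8x8x1, down to 1x1x1; the depth shrinks along with the width and height.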
These are not the same things.
To have a 3D array image, you would need to have each mipmap level contain L 3D images of size WxHxD, where L is the same for each mipmap level.
As for the utility of a cubemap array, it's the same utility you would get out of a 2D array compared to a single 2D image. You use array textures when you need to specify one of a number of images to sample. It's just that in one case, each image is a single 2D image, while in another case, each image is a cubemap.
For a more specific example, many advanced forms of shadow mapping require the use of multiple shadow maps, selected at runtime. That's a good use case for an array texture. You can apply these techniques to point lights through the use of cube maps, but then you need the individual images in the array to be cube maps, not just single 2D images. (And yes, there is a cube array sampler: samplerCubeArray in GLSL, sampled with a vec4 whose fourth component selects the cube.)

How to draw a mesh with multiple textures

I'm following the answer from this thread: Binding Multiple Textures to One Mesh in OpenGL ES 1.1
My implementation doesn't seem to be working and I don't know why.
Here are the facts of my code:
textureArray is an NSMutableArray populated by GLKTextureInfo objects
groupMesh is an array of structs, each of which contains:
a pointer to the place in the index array that we want to get indices from
the size of the index data
I have one array buffer for my vertices and one element array buffer for my indices
I decided to make a for loop. In each iteration I bind a different texture from the GLKTextureInfo array and change the pointer to the region of the index data I want to draw with the texture I just bound.
for (int i = 0; i < mesh->numMeshes - 1; i++)
{
    glBindTexture(GL_TEXTURE_2D,
                  [(GLKTextureInfo *)[textureArray objectAtIndex:i] name]);
    glDrawElements(GL_TRIANGLES,
                   mesh->groupMesh[i].indexDataSize*4,
                   GL_UNSIGNED_INT,
                   mesh->groupMesh[i].indexPointer);
}
The first texture in the array is a tree bark texture, the second texture is tree leaves.
The textures aren't binding after the first iteration, however, which gives this kind of result:
http://img69.imageshack.us/img69/5138/tbko.png
I forced the loop to test if my theory was correct and changed objectAtIndex:i to objectAtIndex:1, and the leaf texture appeared all over:
http://img266.imageshack.us/img266/5598/c05n.png
So it just seems to be glBindTexture that isn't working. Is it because OpenGL is already in the draw state? Is there a way around this?
Note:(I asked a similar question yesterday, but now I've done a bit more research and still I don't know what I'm doing wrong).
The more I think about it, your index data may in fact be to blame here.
First, GL_UNSIGNED_INT is a terrible choice of vertex array element index. You rarely need 4.2 billion vertices; GL_UNSIGNED_SHORT (up to 65536 vertices) is the preferred index type, especially on embedded devices. In fact, OpenGL ES 2.0 only accepts GL_UNSIGNED_INT indices at all if the OES_element_index_uint extension is present. GL_UNSIGNED_BYTE may be tempting for meshes with fewer than 256 vertices, but most hardware cannot natively support 8-bit indices, so you just put more work on the driver.
Now onto what might actually be causing this problem:
You are using mesh->groupMesh[i].indexDataSize*4 as the count argument of glDrawElements, but count is the number of indices to draw, not a byte size. This will overrun your index array, and indexDataSize*3-many of the drawn vertices will be invalid. As weird as it sounds, since 3/4 of your drawn vertices invoke undefined behavior, this could well be the cause of your texturing issues.
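A corrected loop might look like the following sketch. It assumes indexDataSize is the number of indices in each group (which the 4x overrun described above suggests); if it is actually a byte size, pass indexDataSize / sizeof(GLuint) instead. It also visits every group rather than stopping at numMeshes - 1:

for (int i = 0; i < mesh->numMeshes; i++)
{
    glBindTexture(GL_TEXTURE_2D,
                  [(GLKTextureInfo *)[textureArray objectAtIndex:i] name]);
    glDrawElements(GL_TRIANGLES,
                   mesh->groupMesh[i].indexDataSize,  /* count, in indices */
                   GL_UNSIGNED_INT,   /* needs OES_element_index_uint on ES 2.0 */
                   mesh->groupMesh[i].indexPointer);
}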

Reconstruct surface from 3D triangular meshes

I have a 3D model that consists of 3D triangular meshes. I want to partition the meshes into different groups, where each group represents a surface, such as a planar face or a cylindrical surface. This is something like surface recognition/reconstruction.
The input is a set of 3D triangular meshes. The output is the mesh segmentation per surface.
Is there any library that meets my requirements?
If you want to go into lots of mesh processing, then the Point Cloud Library is a good idea, but I'd also suggest CGAL: http://www.cgal.org for more algorithms and loads of structures aimed at meshes.
Lastly, the problem you describe is most easily solved on your own:
enumerate all vertices
enumerate all polygons
create an array of ints with the size of the number of vertices in your "big" mesh, initialize with 0.
create an array of ints with the size of the number of polygons in your "big" mesh, initialize with 0.
initialize a counter to 0
for each polygon in your mesh, look at its vertices and the value each one has in the vertex array.
if those values are all zero, increase the counter and assign it to each of those vertices' entries and to the polygon's entry.
if not, relabel all vertices and polygons carrying the higher labels to the smallest non-zero label among them.
The relabeling can be done quickly with a look up table.
This might save you lots of issues interfacing your code to some library you're not really interested in.
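If you do roll your own, note that the steps above are connected-component labeling, and a union-find (disjoint-set) structure performs the relabeling implicitly. A minimal C sketch, assuming triangles are stored as index triples into a shared vertex array (all names here are illustrative):

#include <stdlib.h>

static int find(int *parent, int x) {
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];   /* path halving */
        x = parent[x];
    }
    return x;
}

static void unite(int *parent, int a, int b) {
    parent[find(parent, a)] = find(parent, b);
}

/* groups[t] receives the component label of triangle t; labels are
   root vertex indices, not consecutive integers */
void label_components(const int (*tris)[3], int numTris,
                      int numVerts, int *groups)
{
    int *parent = malloc(numVerts * sizeof *parent);
    for (int v = 0; v < numVerts; v++) parent[v] = v;

    /* merge the vertex sets of every triangle */
    for (int t = 0; t < numTris; t++) {
        unite(parent, tris[t][0], tris[t][1]);
        unite(parent, tris[t][1], tris[t][2]);
    }

    /* a triangle's group is the root of any one of its vertices */
    for (int t = 0; t < numTris; t++)
        groups[t] = find(parent, tris[t][0]);

    free(parent);
}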
You should have a look at the PCL library, it has all these features and much more: http://pointclouds.org/

How do you reliably (u,v) index a texture as a 2d array of vectors?

Using shader model 5/D3D11/HLSL.
I'd like to treat a 2D array of texels as a 2D matrix of Vectors.
      u ->
v |   (1, 4, 3, 9)   (7, 5.5, 4.9, 2.1)   ...
(Each texel is a 4-component vector). I need to access specific ranges of the data in the texture, for different shaders. So, the ranges to access in the texture naturally should be indexed as u,v components.
How would I do that in HLSL? I'm thinking the following:
Create the texture as per normal
Load your vector values into the texture (1 vector per texel)
Turn off all linear interpolation for texture sampling ("nearest neighbour")
In the shader, look up vectors you need using texture coordinates
The only thing I feel is shaky is whether there will be strange errors introduced when I index the texture using floating point u's and v's.
If the texture is 1024x1024 texels, and I'm trying to index (3,2)->(3,7), that would be (u,v) = (3/1024, 2/1024) -> (3/1024, 7/1024), which feels a bit shaky. Is there a way to index the texture by int components, perhaps? Or will it just work out fine?
Not desiring to use a GPGPU framework just for this (so no CUDA suggestions pls :).
You can index the texture directly with integer texel coordinates using operator[] on a Texture2D in HLSL Shader Model 5.0; it performs no filtering and needs no sampler state.
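A minimal HLSL sketch (the resource name and register are illustrative). Because operator[] takes integer texel coordinates, none of the floating-point u,v rounding concerns apply:

Texture2D<float4> gVectors : register(t0);

float4 FetchVector(uint2 texel)
{
    return gVectors[texel];            // e.g. gVectors[uint2(3, 2)]
}

// Texture2D.Load is equivalent and also takes a mip level:
//     float4 v = gVectors.Load(int3(3, 2, 0));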

Assign mirror operation to vertices array

I understand the math of flipping vertex coordinates in a .obj vertex array to get the mirrored coordinate across a plane/axis. But how do you populate the vertex array for an actual mirror operation (as opposed to just a flip)?
Normally you don't mirror by rewriting vertex values, but by applying an appropriate mirror transform matrix (a scale of -1 along the axis perpendicular to the mirror plane). Keep in mind that mirroring reverses triangle winding order, so you may also need to flip face culling or reverse each triangle's indices.
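If you do need the mirrored geometry baked into an actual vertex array (say, for export back to a .obj), here is a minimal C sketch, assuming tightly packed xyz positions and mirroring across the YZ plane (all names are illustrative):

#include <stddef.h>

/* Mirror across the YZ plane: negate x on every vertex... */
void mirror_mesh(const float *srcPos, float *dstPos, size_t numVerts,
                 const unsigned *srcIdx, unsigned *dstIdx, size_t numTris)
{
    for (size_t v = 0; v < numVerts; v++) {
        dstPos[3*v + 0] = -srcPos[3*v + 0];   /* x -> -x */
        dstPos[3*v + 1] =  srcPos[3*v + 1];
        dstPos[3*v + 2] =  srcPos[3*v + 2];
    }
    /* ...and swap two indices per triangle so the winding stays front-facing */
    for (size_t t = 0; t < numTris; t++) {
        dstIdx[3*t + 0] = srcIdx[3*t + 0];
        dstIdx[3*t + 1] = srcIdx[3*t + 2];
        dstIdx[3*t + 2] = srcIdx[3*t + 1];
    }
}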