How to draw a mesh with multiple textures - objective-c

I'm following the answer from this thread: Binding Multiple Textures to One Mesh in OpenGL ES 1.1
My implementation doesn't seem to be working and I don't know why.
Here are the facts of my code:
textureArray is an NSMutableArray populated by GLKTextureInfo objects.
groupMesh is an array of structs, each containing:
a pointer to the place in the index array that we want to get indices from
the size of that index data
I have one array buffer for my vertices and one element array buffer for my indices.
I decided to make a for loop. In each iteration I bind a different texture from the GLKTextureInfo array, and I change the pointer to the area of memory of the index data I want to draw with the texture I just bound.
for (int i = 0; i < mesh->numMeshes - 1; i++)
{
    glBindTexture(GL_TEXTURE_2D,
                  [(GLKTextureInfo *)[textureArray objectAtIndex:i] name]);
    glDrawElements(GL_TRIANGLES,
                   mesh->groupMesh[i].indexDataSize * 4,
                   GL_UNSIGNED_INT,
                   mesh->groupMesh[i].indexPointer);
}
The first texture in the array is a tree bark texture, the second texture is tree leaves.
However, the textures aren't binding after the first iteration, which gives this kind of result:
http://img69.imageshack.us/img69/5138/tbko.png
I forced the loop to test if my theory was correct and changed objectAtIndex:i to objectAtIndex:1, and the leaf texture appeared all over:
http://img266.imageshack.us/img266/5598/c05n.png
So it just seems to be glBindTexture that isn't working. Is it because OpenGL is already in the draw state? Is there a way around this?
Note: I asked a similar question yesterday, but now I've done a bit more research and I still don't know what I'm doing wrong.

The more I think about it, your index data may in fact be to blame here.
First, GL_UNSIGNED_INT is a terrible choice for a vertex array element index. You rarely need 4.2 billion vertices; GL_UNSIGNED_SHORT (up to 65,536 vertices) is the preferred index type, especially on embedded devices. GL_UNSIGNED_BYTE may be tempting for meshes with fewer than 256 vertices, but most hardware cannot natively support 8-bit indices, so you just put more work on the driver.
Now onto what might actually be causing this problem:
You are using mesh->groupMesh[i].indexDataSize*4 for the number of vertices to draw. This will overrun your index array, and indexDataSize*3-many vertices will be invalid. As weird as it sounds, since 3/4 of your drawn vertices invoke undefined behavior, this could be the cause of your texturing issues.
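For reference, here is a minimal sketch of the corrected loop, assuming indexDataSize already holds the number of indices in each group (if it actually holds a byte count, divide by the size of the index type instead of multiplying):
for (int i = 0; i < mesh->numMeshes - 1; i++)   // loop bound kept as in the question
{
    glBindTexture(GL_TEXTURE_2D,
                  [(GLKTextureInfo *)[textureArray objectAtIndex:i] name]);
    glDrawElements(GL_TRIANGLES,
                   mesh->groupMesh[i].indexDataSize,   // number of indices, not indices * 4
                   GL_UNSIGNED_INT,                    // or GL_UNSIGNED_SHORT if no group needs 32-bit indices
                   mesh->groupMesh[i].indexPointer);
}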

Related

Vertex buffer with vertices of different formats

I want to draw a model that's composed of multiple meshes, where each mesh has different vertex formats. Is it possible to put all the various vertices within the same vertex buffer, and to point to the correct offset at vkCmdBindVertexBuffers time?
Or must all vertices within a buffer have the same format, thus necessitating multiple vbufs for such a model?
Looking at the manual for vkCmdBindVertexBuffers, it's not clear whether the offset is in bytes or in vertex-strides.
https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/vkCmdBindVertexBuffers.html
Your question really breaks down into 3 questions:
Does the pOffsets parameter for vkCmdBindVertexBuffers accept bytes or vertex strides?
Can I put more than one vertex format into a vertex buffer?
Should I put more than one vertex format into a vertex buffer?
The short version is:
Bytes
Yes
Probably not
Does the pOffsets parameter for vkCmdBindVertexBuffers accept bytes or vertex strides?
The function signature is
void vkCmdBindVertexBuffers(
    VkCommandBuffer        commandBuffer,
    uint32_t               firstBinding,
    uint32_t               bindingCount,
    const VkBuffer*        pBuffers,
    const VkDeviceSize*    pOffsets);
Note the VkDeviceSize type for pOffsets. This unambiguously means "bytes", not strides. A VkDeviceSize always means an offset or size in raw memory. Vertex strides aren't raw memory; they're simply a count, so their type would have to be uint32_t or uint64_t.
Furthermore there's nothing in that function signature that specifies the vertex format so there would be no way to convert the vertex stride count to actual memory sizes. Remember that unlike OpenGL, Vulkan is not a state machine, so this function doesn't have any "memory" of a rendering pipeline that you might have previously bound.
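To illustrate, here is a minimal sketch (the handles, vertex structs, and counts below are hypothetical) of binding two differently formatted meshes stored back to back in one VkBuffer, using byte offsets:
#include <vulkan/vulkan.h>

/* Hypothetical vertex formats used only for this sketch. */
typedef struct { float pos[3]; } VertexA;
typedef struct { float pos[3]; float normal[3]; float uv[2]; } VertexB;

/* 'cmd' is assumed to be a command buffer in the recording state and
   'modelBuffer' a VkBuffer holding mesh A's vertices followed by mesh B's. */
static void bindMeshesFromOneBuffer(VkCommandBuffer cmd, VkBuffer modelBuffer,
                                    uint32_t meshAVertexCount)
{
    VkDeviceSize offset = 0;                                    /* mesh A starts at byte 0 */
    vkCmdBindVertexBuffers(cmd, 0, 1, &modelBuffer, &offset);
    /* ... bind a pipeline whose vertex input matches VertexA, then draw mesh A ... */

    offset = (VkDeviceSize)meshAVertexCount * sizeof(VertexA);  /* a byte offset, not a stride count */
    vkCmdBindVertexBuffers(cmd, 0, 1, &modelBuffer, &offset);
    /* ... bind a pipeline whose vertex input matches VertexB, then draw mesh B ... */
}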
Can I put more than one vertex format into a vertex buffer?
As a consequence of the above answer, yes. You can put pretty much whatever you want into a vertex buffer, although I believe some hardware will have alignment restrictions on what are valid offsets for vertex buffers, so make sure you check that.
Should I put more than one vertex format into a vertex buffer?
Generally speaking you want to render your scene in as few draw calls as possible, and having lots of arbitrary vertex formats runs counter to that. I would argue that if possible, the only time you want to change vertex formats is when you're switching to a different rendering pass, such as when switching between rendering opaque items to rendering transparent ones.
Instead you should try to make format normalization part of your asset pipeline, taking your source assets and converting them to a single consistent format. If that's not possible, then you could consider doing the normalization at load time. This adds complexity to the loading code, but should drastically reduce the complexity of the rendering code, since you now only have to think in terms of a single vertex format.
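As an illustration of load-time normalization, here is a minimal sketch (the struct and function names are hypothetical) that converts a source format lacking UVs into one canonical interleaved format, so the renderer only ever sees a single vertex layout:
#include <stddef.h>
#include <string.h>

/* Canonical format every mesh is converted to at load time. */
typedef struct { float pos[3]; float normal[3]; float uv[2]; } CanonicalVertex;

/* One of possibly many source formats; this one has no UVs. */
typedef struct { float pos[3]; float normal[3]; } SourcePosNormal;

static void normalizePosNormal(const SourcePosNormal *src, size_t count,
                               CanonicalVertex *dst)
{
    for (size_t i = 0; i < count; i++) {
        memcpy(dst[i].pos,    src[i].pos,    sizeof dst[i].pos);
        memcpy(dst[i].normal, src[i].normal, sizeof dst[i].normal);
        dst[i].uv[0] = 0.0f;    /* fill missing attributes with defaults */
        dst[i].uv[1] = 0.0f;
    }
}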

OpenGL ES 2.0 - Copy Texture Data

I have an array of 2D textures initialized in OpenGL ES 2.0:
GLuint textures[10];
After I delete one of the textures at a given array index:
glDeleteTextures(1, &textures[5]);
How do I remove the gap left in my array, with relative ease, in order to keep things neat and tidy? Is there a more direct method than rendering each texture and then using glGetTexImage to change the order of the textures in the array?
The textures array is actually an array of 'names', non-zero integers returned by glGenTextures, that refer to texture objects living on the GPU. As long as you keep track of which names are valid and what you're using them for, you can reorder or compact the array any way you want.
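If you do want to keep the names packed together, a minimal sketch (assuming a GLuint textures[10] array and a count of valid names; the function and variable names here are hypothetical) is to shift the remaining names down after deleting one:
#include <string.h>
#include <OpenGLES/ES2/gl.h>   /* adjust the GLES header path for your platform */

/* Delete the texture name at 'slot' and close the gap by shifting the later
   names down. This is CPU-side bookkeeping only; no texture data is copied. */
static void deleteAndCompact(GLuint *textures, size_t *count, size_t slot)
{
    glDeleteTextures(1, &textures[slot]);
    memmove(&textures[slot], &textures[slot + 1],
            (*count - slot - 1) * sizeof(GLuint));
    (*count)--;
    textures[*count] = 0;      /* clear the now-unused trailing entry */
}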

How do you reliably (u,v) index a texture as a 2d array of vectors?

Using shader model 5/D3D11/HLSL.
I'd like to treat a 2D array of texels as a 2D matrix of Vectors.
(Diagram: a grid indexed by u across and v down, where each texel is a 4-component vector such as (1, 4, 3, 9) or (7, 5.5, 4.9, 2.1).)
I need to access specific ranges of the data in the texture, for different shaders, so the ranges to access in the texture naturally should be indexed as u,v components.
How would I do that in HLSL? I'm thinking the following:
Create the texture as per normal
Load your vector values into the texture (1 vector per texel)
Turn off all linear interpolation for texture sampling ("nearest neighbour")
In the shader, look up vectors you need using texture coordinates
The only thing I feel is shaky is whether there will be strange errors introduced when I index the texture using floating point u's and v's.
If the texture is 1024x1024 texels and I'm trying to index (3,2) through (3,7), that would be (u,v) = (3/1024, 2/1024) through (3/1024, 7/1024), which feels a bit shaky. Is there a way to index the texture by integer components, perhaps? Or will it just work out fine?
Not desiring to use a GPGPU framework just for this (so no CUDA suggestions pls :).
You can index the texture directly with integer texel coordinates using operator[] in HLSL Shader Model 5, for example float4 v = tex[uint2(x, y)];. No sampler state or floating-point UV math is involved.

Seam Carving – Accessing pixel data in cocoa

I want to implement the seam carving algorithm by Avidan/Shamir. After the energy computation stage, which can be implemented as a Core Image filter, I need to compute the seams with the lowest energy. That part can't be implemented as a Core Image filter because it uses dynamic programming (and you don't have access to previous computations in the OpenGL Shading Language).
So I need a way to access the pixel data of an image efficiently in Objective-C/Cocoa.
Pseudo code omitting boundary checks:
for y in 0..lines(image) do:
    for x in 0..columns(image) do:
        output[x][y] = value(image, x, y) +
                       min{ output[x-1][y-1]; output[x][y-1]; output[x+1][y-1] }
The best way to get access to the pixel values of an image is to create a CGBitmapContextRef with CGBitmapContextCreate. The important part is that when you create the context, you get to pass in the pointer that will be used as the backing store for the bitmap's data, meaning that this data will hold the pixel values and you can do whatever you want with them.
So the steps should be:
Allocate a buffer with malloc or another suitable allocator.
Pass that buffer as the first parameter to CGBitmapContextCreate.
Draw your image into the returned CGBitmapContextRef.
Release the context.
Now you have your original data pointer that is filled with pixels in the format specified in the call to CGBitmapContextCreate.
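Put together, here is a minimal sketch of those steps (the function name is hypothetical; error handling is omitted):
#include <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>

/* Returns a malloc'd RGBA8 buffer (4 bytes per pixel, rows top to bottom)
   holding the pixels of 'image'; the caller must free() it. */
static unsigned char *copyPixelData(CGImageRef image)
{
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    size_t bytesPerRow = width * 4;
    unsigned char *data = calloc(height, bytesPerRow);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(data, width, height,
                                             8,            /* bits per component */
                                             bytesPerRow, colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    /* Drawing the image into the context fills 'data' with its pixels. */
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);
    CGContextRelease(ctx);
    return data;   /* data[(y * width + x) * 4 + c] is component c of pixel (x, y) */
}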

How do you use glDrawElements with GL_UNSIGNED_INT for the indices?

I'm trying to draw 3d objects that have more than 65536 vertices on the iPad, but can't figure out what I'm doing wrong. My original model that used GL_UNSIGNED_SHORT worked just fine, but now with GL_UNSIGNED_INT, I can't get anything to show up using the glDrawElements command. It's like the renderer is ignoring my glDrawElements line completely. The portion of my rendering loop that I'm referencing is below:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(assemblyObj->vertices[0]) * 6, &assemblyObj->vertices[0]);
glNormalPointer(GL_FLOAT, sizeof(assemblyObj->vertices[0]) * 6, &assemblyObj->vertices[0]);
for (int i = 0; i < assemblyObj->numObjects; i++)
{
    glDrawElements(GL_TRIANGLES, assemblyObj->partList[i].faceArray.size(),
                   GL_UNSIGNED_INT, &assemblyObj->partList[i].faceArray[0]);
}
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
vertices is defined as:
vector<float> vertices;
and each faceArray is defined as:
vector<UInt32> faceArray;
Any suggestions on what I'm doing wrong that is preventing my geometry from drawing?
Stock OpenGL ES does not support GL_UNSIGNED_INT for indices.
From the GLES glDrawElements man page:
GL_INVALID_ENUM is generated if type is not GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT.
This restriction is relaxed when GL_OES_element_index_uint is supported.
If you don't have support on the target platform, your best bet is to split your mesh into multiple sub-meshes that each reference fewer than 65,536 vertices, so GL_UNSIGNED_SHORT indices suffice.
As to the iPad specifically, as far as I know iOS does not support this extension (see the list of supported extensions), but you can query the extension string on the actual device if you want to make sure.
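A minimal sketch of such a runtime check (the header path may differ per platform, and the function name is hypothetical):
#include <string.h>
#include <OpenGLES/ES1/gl.h>   /* or the GLES header for your platform */

/* Returns non-zero if 32-bit element indices are usable on this device. */
static int hasUintIndexSupport(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_OES_element_index_uint") != NULL;
}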