OpenGL ES 2.0 texture clamping - objective-c

I'm trying to get my texture to tile when texture coordinates go beyond 1.
I have tried this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
However, when I set these two lines, all I see is black, no texture at all!
This works, but doesn't give the repeating effect, which I need:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Help! I've already spent a couple of hours investigating with no results!

In OpenGL ES 2.0, setting GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T to GL_REPEAT requires your texture dimensions to be powers of two (unless the GL_OES_texture_npot extension is available); otherwise the texture is incomplete and samples as black, which is what you're seeing.
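A quick way to check whether a texture's dimensions qualify; a tiny C helper (the name is made up):
// true if n is a power of two (1, 2, 4, 8, ...)
static int isPowerOfTwo(int n) {
    return n > 0 && (n & (n - 1)) == 0;
}
If a texture fails this check, you can either pad it up to the next power of two, or keep GL_CLAMP_TO_EDGE and wrap the coordinates in the shader, as described below.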

You can convert texture coordinate values greater than one to values in the range 0...1 by keeping only the fractional part. Here's some code you can put in your fragment shader, assuming the texture coordinates are in texture_coord:
// keep only the fractional part so the coordinates wrap into 0..1
texture_coord.x = mod(texture_coord.x, 1.0);
texture_coord.y = mod(texture_coord.y, 1.0);
gl_FragColor = texture2D(s_texture, texture_coord);
I have tested this in OpenGL ES 2.0 and it works as expected, allowing you to use textures of any size, not just powers of two.

Related

OpenGL texture mapping off by 5-8 pixels

I've got a bunch of thumbnails/icons packed right up next to each other in a texture map / sprite sheet. From a pixel-to-pixel relationship, these are being scaled up from 145 pixels square to 238 screen pixels square. I was expecting to get +-1 or 2 pixel accuracy on the edges of the box when accessing the texture coordinates, so I'm also drawing a 4-pixel outline on top of the thumbnail to hide this probable artifact. But I'm seeing huge variations in accuracy. Sometimes it's off in one direction, sometimes the other.
I've checked over the math and I can't figure out what's happening.
The thumbnail is being scaled up about 1.64 times, so a single pixel off in the source texture coordinate could result in around 2 pixels off on the screen. The 4-pixel white frame on top is being drawn at a 1:1 pixel-to-fragment relationship and is supposed to cover about 2 pixels on either side of the edge of the box. That part is working. Here I've turned off the border to show how far off the texture coordinates are...
I can tweak the numbers manually to make it go away, but I have to shrink the texture coordinate width/height by several source pixels and in some cases add (or subtract) 5 or 6 pixels to the starting point. I really just want the math to work out, or to figure out what I'm doing wrong here. This sort of stuff drives me nuts!
A bunch of crap to know.
The texture coordinate offsetting is done in the vertex shader...
v_fragmentTexCoord0 = vec2((a_vertexTexCoord0.x * u_texScale) + u_texOffset.s, (a_vertexTexCoord0.y * u_texScale) + u_texOffset.t);
gl_Position = u_modelViewProjectionMatrix * vec4(a_vertexPosition,1.0);
This object is a box which is a triangle strip with 2 tris.
Not that it should matter, but the matrix applied to the model isn't doing any scaling. The box is at screen scale; the scaling is happening only in the texture coordinates being supplied.
The texture coordinates of the object as seen above are 0.00 - 0.07, and the shader then adds an offset amount that differs per thumbnail. 0.07 out of 2048 is about 143 pixels (0.07 * 2048 = 143.36). Originally I had it at 0.0708, which should be closer to 145 (0.0708 * 2048 = 145.0), but that was worse and showed more like 148 pixels of the texture. To get it to show only 145 source pixels, I have to make it 0.06835, which is 140 pixels.
I've tried doing the math in a calculator and typing the numbers in directly. I've also tried entering expressions like =1305/2048. These are going into GLfloats, not doubles.
This texture map image is PNG and is loaded with these settings:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
but I've also tried GL_LINEAR with no apparent difference.
I'm not having any accuracy problems on other textures (in the same texture map) where I'm not doing the texture scaling.
It doesn't get farther off as the coordinates get higher. In the image above, the NEG MAP thumb is right next to the HEAT MAP thumb, and they are off in different directions but correct at the seam.
Here's the offset data for those two:
filterTypes[FT_gradientMap20].thumbTexOffsetS = 0.63720703125;
filterTypes[FT_gradientMap20].thumbTexOffsetT = 0.1416015625;
filterTypes[FT_gradientMap21].thumbTexOffsetS = 0.7080078125;
filterTypes[FT_gradientMap21].thumbTexOffsetT = 0.1416015625;
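(For reference, those offsets are exact texel fractions of the 2048-wide atlas: 0.63720703125 = 1305/2048, 0.7080078125 = 1450/2048, and 0.1416015625 = 290/2048, so the two thumbnails' starting points are exactly 145 texels apart.)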
==== UPDATE ====
A couple of things off the bat I realized I was doing wrong, which are discussed over here: OpenGL Texture Coordinates in Pixel Space
The width of a single thumbnail is 145 pixels. But that spans texels 0-144, with texel 145 starting the next one. I was using a width of 145, so that's 1 pixel too big. Using the center-of-pixel math above, we should actually go from the center of texel 0 to the center of texel 144: 144.5 - 0.5 = 144.
Using his formula of (2i + 1)/(2N), I made new offset amounts for each of the starting points and used 144/2048 as the width. That made things better, but still off in some areas, and again off in one direction sometimes and the other at other times, although consistent for each x or y position.
Using a width of 143 gives better results. I can fix them all by adjusting the numbers manually until they work, but I want the math to work out right.
...or maybe it has something to do with min/mag filtering, although I've read up on that and what I'm doing seems right for this case.
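For reference, the center-of-pixel formula from the linked question works out like this; a small C sketch (the helper name is made up, the texel numbers come from the offsets above):
// (2i + 1) / (2N): texture coordinate of the center of texel i
// in an atlas N texels wide
static float texelCenter(int i, int n) {
    return (2.0f * i + 1.0f) / (2.0f * n);
}

// e.g. for a thumbnail starting at texel 1305 (0.63720703125 * 2048):
//   texelCenter(1305, 2048)        -> left edge (center of its first texel)
//   texelCenter(1305 + 144, 2048)  -> right edge (center of its 145th texel)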
After a lot of experiments, and having to create a grid-lined guide texture so I could see exactly how far off each texture was... I finally got it!
It's pretty simple actually.
uniform mat4 u_modelViewProjectionMatrix;
uniform mediump vec2 u_texOffset;
uniform mediump float u_texScale;

attribute vec3 a_vertexPosition;
attribute mediump vec2 a_vertexTexCoord0;  // specifying mediump here was the fix
The problem was the precision of the texture coordinates. By specifying mediump, it just fixed itself. I suspect this would also help solve the problem I was having in this question:
Why is a texture coordinate of 1.0 getting beyond the edge of the texture?
Once I did that, I had to go back to my original width of 145 (which still seems wrong, but oh well). And for what it's worth, I then ended up going back to all my original math on all the texture coordinates. The "center of pixel" method was showing more of the neighboring pixels than the straight /2048 did.

Trouble getting texture pixels not to repeat at the edge

I'm not familiar with OpenGL, and I cannot seem to figure out this small detail. I keep finding information on repeating a texture, not on its border elements.
When zooming in on a texture, the edge pixels are repeated out to the edge of the surface view I am working in. I would rather see a clear background behind those pixels when the texture is smaller than the view.
I assume it has to do with the attributes set on my texture around these lines of code, but I just can't follow it well enough to understand what changes to make to get the desired result.
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Aid on this (most likely a quick modification) would put my current issue to rest.
Ok, I'll try to explain.
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
GL_TEXTURE_MIN_FILTER is the filter applied when the texture's bitmap is larger than the area it is drawn into (minification); GL_TEXTURE_MAG_FILTER is the opposite (magnification). So this is not the problem.
GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T, however, define what to do when a texture coordinate falls outside the 0.0 to 1.0 range (one setting per axis). You have them both set to GL_CLAMP_TO_EDGE, which basically means that if a texture coordinate is < 0 it is treated as 0, and if it is > 1 it is treated as 1. That is basically what you see happening in your picture.
However, these can only be changed to GL_REPEAT, which repeats the texture, or GL_MIRRORED_REPEAT, which repeats and mirrors it. So your problem cannot be fixed by changing these settings; there is no discard-if-out-of-range setting, which I think is what you need.
You can read more about this in the documentation if you please: http://www.khronos.org/opengles/sdk/docs/man/
I don't know exactly how you zoom, but it usually doesn't involve changing texture coordinates, as far as I know. If you want to keep it this way, you could try a little hack though: in the fragment shader, before setting gl_FragColor, add:
if (tc.x < 0.0 || tc.x > 1.0 || tc.y < 0.0 || tc.y > 1.0)
    discard;
where tc is your texture coordinate. This discards the fragment if the texture coordinate is outside the 0 to 1 range.
It's not a pretty solution, but it should do the trick.
Brianberg's approach would work, but scaling the quad you're rendering the image to, rather than modifying texture coordinates, would be a cleaner and better-performing solution. There's no need to change the texture coordinates, since you always want the image to fit the quad exactly.
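A minimal sketch of that idea, assuming a zoom factor and an image size in view units (all of these names are invented):
// scale the quad's vertex positions with the zoom level and keep the
// texture coordinates at the full 0..1 range
GLfloat w = imageWidth * zoom;
GLfloat h = imageHeight * zoom;
GLfloat quad[8] = {
    centerX - w / 2, centerY - h / 2,   // bottom-left
    centerX + w / 2, centerY - h / 2,   // bottom-right
    centerX - w / 2, centerY + h / 2,   // top-left
    centerX + w / 2, centerY + h / 2,   // top-right (triangle strip order)
};
glVertexPointer(2, GL_FLOAT, 0, quad);
This way the image always fits the quad exactly, and fragments outside it are never generated in the first place.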

Applying a scale and translate transformation to UIBezierPath

I have a UIBezierPath and I would like to:
Move to any coordinate on the UIView
Make bigger or smaller
I am drawing the UIBezierPath based on a list of predefined coordinates. I implemented this code:
CGAffineTransform move = CGAffineTransformMakeTranslation(0, 0);
CGAffineTransform moveAndScale = CGAffineTransformScale(move, 1.0f, 1.0f);
[shape applyTransform:moveAndScale];
I have also tried scaling and then moving the shape; it seems to make little to no difference.
Using this code:
[shape moveToPoint:CGPointMake(0, 0)];
I start drawing the shape at (0, 0), but this is what happens. I assume this is because a line is being drawn from (0, 0) to the next point in the list.
When I set the move transformation to (0, 0), this is where it draws. Here, moveToPoint: is set to the first coordinate pair in the list. As you can see, it is not at (0, 0).
Finally, increasing the 1.0f moves the shape off the screen completely, no matter where I tell the shape to move.
Can someone help me understand why the shape is not drawing at (0, 0), and why it moves off the screen when I scale it?
(As requested by the OP in a comment above)
I might be wrong on this one, but doesn't this code
CGAffineTransformMakeTranslation(0, 0);
just say that something should be moved 0 pixels along the x-axis and 0 pixels along the y-axis? (reference) It won't actually move anything to the origin (0, 0), as it seems you are trying to do.
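If the goal is to move the path itself to (0, 0), one option (a sketch; 'shape' is the UIBezierPath from the question) is to translate by the negative of the path's current bounds origin:
CGRect bounds = [shape bounds];
[shape applyTransform:CGAffineTransformMakeTranslation(-bounds.origin.x, -bounds.origin.y)];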
Also, it seems like you have slightly misunderstood how to properly use moveToPoint:. Think of it as a way to move your cursor without actually drawing anything; it is just a way to say 'I want to start drawing at this point'. The drawing itself is performed by other methods. If you wanted to, e.g., draw a square with sides of length L, you could do something like this:
// 'shape' is a UIBezierPath
CGFloat L = 100;
CGPoint origin = CGPointMake(50, 50);
[shape moveToPoint:origin];                                  // initial point to draw from
[shape addLineToPoint:CGPointMake(origin.x+L, origin.y)];    // top edge, from origin to the right
[shape addLineToPoint:CGPointMake(origin.x+L, origin.y+L)];  // right edge, downwards
[shape addLineToPoint:CGPointMake(origin.x, origin.y+L)];    // bottom edge
[shape addLineToPoint:origin];                               // left edge, back up to origin
Note that this code is not tested at all, but it should give you the idea of how to use moveToPoint: and addLineToPoint:.
You need to be careful about the order you apply the transforms in, and you should consider concatenating the transforms and applying them in one go.
The order is important, as each transform affects all x,y positions in the path. So the translation is affected by the scale; reverse the order, and the path will be scaled and then moved.
Also, the coordinate system is important, particularly if you are scaling. Ensure you draw around (0,0), then scale, then translate. This is easiest if you normalise the points. Normalising lat/long values means dividing latitude by 90 and longitude by 180 (which will actually give you a range of -1..1). When doing this you should first scale the path, then translate it to the centre of the view, then apply your desired translation, as in the sketch below.
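A sketch of that order using concatenation (the scale factor and the use of the view's centre are placeholders):
// CGAffineTransformConcat(a, b) applies a first, then b, so this scales
// the (normalised) path and then moves it to the centre of the view
CGAffineTransform scale = CGAffineTransformMakeScale(100.0, 100.0);
CGAffineTransform centre = CGAffineTransformMakeTranslation(self.view.bounds.size.width / 2,
                                                            self.view.bounds.size.height / 2);
[shape applyTransform:CGAffineTransformConcat(scale, centre)];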

OpenGL ES - How to Batch Render 500+ particles w/ different alphas, rotations, and scales?

I am developing an iOS game that needs to render 500-800 particles at a time. I have learned that it is a good idea to batch render many sprites in OpenGL ES, instead of calling glDrawArrays(..) for every sprite in the game, in order to render more sprites without a drastic reduction in frame rate.
My question is: how do I batch render 500+ particles that all have different alphas, rotations, and scales, but share the same texture atlas? The emphasis of this question is on the different alphas, rotations, and scales for each particle.
I realize this question is very similar to How do I draw 1000+ particles (w/ unique rotation, scale, and alpha) in iPhone OpenGL ES particle system without slowing down the game?; however, that question does not address batch rendering. Before I take advantage of vertex buffer objects, I want to understand batch rendering in OpenGL ES with unique alphas, rotations, and scales (but the same texture). So while I plan on using VBOs eventually, I want to take this approach first.
Code examples would be greatly appreciated, and if you use an indices array, as some examples do, please explain the structure and purpose of the indices array.
EDIT: I am using OpenGL ES 1.1.
EDIT: Below is a code example of how I render each particle in the scene. Assume the particles share the same texture, and that the texture is already bound in OpenGL ES 1.1 before this code executes.
- (void)render {
    glPushMatrix();

    glTranslatef(translation.x, translation.y, translation.z);
    glRotatef(rotation.x, 1, 0, 0);
    glRotatef(rotation.y, 0, 1, 0);
    glRotatef(rotation.z, 0, 0, 1);
    glScalef(scale.x, scale.y, scale.z);

    // change alpha
    glColor4f(1.0, 1.0, 1.0, alpha);

    // glBindTexture(GL_TEXTURE_2D, texture[0]);
    glVertexPointer(2, GL_FLOAT, 0, texturedQuad.vertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texturedQuad.textureCoords);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);

    glPopMatrix();
}
A code alternative to this method would be greatly appreciated!
One possibility would be to include those values in a vertex attrib array; I think this is the best option, but if you're using OpenGL ES 1.1 instead of 2.0, you're out of luck, since it requires shaders. Vertex attrib arrays allow you to store a value at each vertex; in this case you could store the alphas and rotations each in their own attrib array and pass them to the shader with glVertexAttribPointer. The shader would then do the rotation transformation and the color processing with the alpha.
The other option is to do the rotation transformation on the CPU and batch particles with similar alpha values into several draw calls. This version requires a little more work, and it would not be a single draw call, but it would still help optimize things when shaders are not an option.
NOTE: The question you linked to also recommends the array solution.
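(For reference, on ES 2.0 the attrib-array route looks roughly like this; the attribute name and the alphas array are assumptions:)
// one alpha per vertex, read by the vertex shader as "a_alpha"
GLint alphaLoc = glGetAttribLocation(program, "a_alpha");
glEnableVertexAttribArray(alphaLoc);
glVertexAttribPointer(alphaLoc, 1, GL_FLOAT, GL_FALSE, 0, alphas);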
EDIT: Given that your code is OpenGL ES 1.1, here's a solution using glColorPointer:
// allocate buffers to hold the data for all particles,
// 6 vertices per particle (two triangles per quad, since
// independent quads can't share one triangle strip)
verticesBuffer = ...
texCoordBuffer = ...
colorBuffer = ...
for (particle in allParticles)
{
    // create a matrix from the particle's rotation
    rotMatrix = matrix(particle.rotation.x, particle.rotation.y, particle.rotation.z)
    // transform the particle's vertices by the matrix on the CPU
    verticesBuffer[i] = particle.vertices * rotMatrix
    // copy the other per-particle data
    texCoordBuffer[i] = particle.texCoords;
    colorBuffer[i] = color(1.0, 1.0, 1.0, particle.alpha);
}
glEnableClientState(GL_COLOR_ARRAY);  // per-vertex color carries each particle's alpha
glVertexPointer(2, GL_FLOAT, 0, verticesBuffer);
glTexCoordPointer(2, GL_FLOAT, 0, texCoordBuffer);
glColorPointer(4, GL_FLOAT, 0, colorBuffer);
glDrawArrays(GL_TRIANGLES, 0, particleCount * 6);
A good optimization for this solution would be to reuse the buffers across frames so you don't have to reallocate them every frame.
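Another refinement would be to interleave the per-vertex data in a single struct, so all three pointers walk one buffer (a sketch; the struct and variable names are invented):
// hypothetical interleaved vertex: position, texcoord, color
typedef struct {
    GLfloat x, y;        // position
    GLfloat s, t;        // texture coordinates
    GLubyte r, g, b, a;  // color, including the particle's alpha
} ParticleVertex;

// the three pointers share one array, offset per field, with the struct size as stride
glVertexPointer(2, GL_FLOAT, sizeof(ParticleVertex), &verts[0].x);
glTexCoordPointer(2, GL_FLOAT, sizeof(ParticleVertex), &verts[0].s);
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(ParticleVertex), &verts[0].r);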

OpenGL multi texture masking

I followed @Stefan Monov's tutorial from this question, and everything works, but I need to make it work with my brush. I need this part:
// Next, we want a blendfunc that doesn't change the color of any pixels,
// but rather replaces the framebuffer alpha values with values based
// on the whiteness of the mask. In other words, if a pixel is white in the mask,
// then the corresponding framebuffer pixel's alpha will be set to 1.
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
// Now "draw" the mask (again, this doesn't produce a visible result, it just
// changes the alpha values in the framebuffer)
drawQuad(maskTexture);
to draw not a static texture, but a dynamic shape. That is, I'm trying to implement a brush that does what @Stefan Monov describes. I need that part to live in a separate - (void) method, so that it can be called whenever the coordinates change (when the user draws). I've tried a variety of ways to change the sequence of the code, but then it doesn't work correctly. My current brush code is:
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);  // must be set outside glBegin/glEnd
glPointSize(100);
glBegin(GL_POINTS);
glVertex2f(loc.x, loc.y);
glEnd();
glDisable(GL_BLEND);
It is called when the mouse is dragged; "loc" is the dragged mouse coordinate. Of course it's not working at all right now, because of the blend func and the code's sequence. When I leave the sequence as @Stefan Monov described, it works, but it draws one point and drags it as the mouse moves, because after the point is drawn, the other textures are redrawn too. Any, at least similar, solution for this?
To make it clearer, I'll show how I want my app to work.
Here is the original code:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ZERO);
drawQuad(backgroundTexture);
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
drawQuad(maskTexture);
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
drawQuad(foregroundTexture);
Now it works like this:
Draws background
Draws mask
Draws foreground
I need it to work like this:
Draws background
Draws foreground
Draws mask
But if I change the order of the draws, it stops working and ignores the mask.
Expected result photo:
Your attempt to draw the mask last makes no sense, really.
Drawing the mask only modifies what's in the alpha channel, and the alpha channel itself is not visible in any way in the final image.
The only use of the mask is to modify what's drawn after it.
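In practice that means keeping the order from the original code and making the mask step dynamic: each frame, redraw the background, replay every brush point recorded so far as the mask, then draw the foreground. A sketch (drawBrushPoints is a hypothetical helper that re-issues all the recorded points with the brush settings from above):
glBlendFunc(GL_ONE, GL_ZERO);
drawQuad(backgroundTexture);              // 1. background

glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
drawBrushPoints();                        // 2. mask: all brush strokes so far

glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
drawQuad(foregroundTexture);              // 3. foreground, masked by the alpha channel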