OpenGL texture mapping off by 5-8 pixels - OpenGL ES 2.0

I've got a bunch of thumbnails/icons packed right up next to each other in a texture map / sprite sheet. In terms of the pixel-to-pixel relationship, they are being scaled up from 145 pixels square to 238 screen pixels square. I was expecting ±1 or 2 pixels of accuracy on the edges of the box when accessing the texture coordinates, so I'm also drawing a 4 pixel outline over the top of each thumbnail to hide this probable artifact. But I'm seeing huge variations in accuracy. Sometimes it's off in one direction, sometimes the other.
I've checked over the math and I can't figure out what's happening.
The thumbnail is being scaled up about 1.64 times, so a single pixel off in the source texture coordinate could result in around 2 pixels off on the screen. The 4 pixel white frame over the top is being drawn at a 1:1 pixel-to-fragment relationship and is supposed to cover about 2 pixels on either side of the edge of the box. That part is working. Here I've turned off the border to show how far off the texture coordinates are....
I can tweak the numbers manually to make it go away. But I have to shrink the texture coordinate width/height by several source pixels and in some cases add (or subtract) 5 or 6 pixels to the starting point. I really just want the math to work out or to figure out what I'm doing wrong here. This sort of stuff drives me nuts!
A bunch of crap to know.
The shader is doing the texture coordinate offsetting in the vertex shader...
v_fragmentTexCoord0 = vec2((a_vertexTexCoord0.x * u_texScale) + u_texOffset.s, (a_vertexTexCoord0.y * u_texScale) + u_texOffset.t);
gl_Position = u_modelViewProjectionMatrix * vec4(a_vertexPosition,1.0);
This object is a box which is a triangle strip with 2 tris.
Not that it should matter, but the matrix applied to the model isn't doing any scaling. The box is at screen scale. The scaling is happening only in the texture coordinates being supplied.
The texture coordinates of the object as seen above are 0.00 - 0.07, and the shader then adds an offset amount which is different per thumbnail. 0.07 out of 2048 is about 143 pixels. Originally I had it at 0.0708, which should be closer to 145, but that was worse and showed more like 148 pixels from the texture. To get it to show only 145 source pixels I have to make it 0.06835, which is 140 pixels.
I've tried doing the math in a calculator and typing in the numbers directly. I've also tried entering expressions like =1305/2048. These are going into GLfloats, not doubles.
This texture map image is PNG and is loaded with these settings:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
but I've also tried GL_LINEAR with no apparent difference.
I'm not having any accuracy problems on other textures (in the same texture map) where I'm not doing the texture scaling.
It doesn't get farther off as the coords get higher. In the image above, the NEG MAP thumb is right next to the HEAT MAP thumb; they are off in different directions but correct at the seam.
Here's the offset data for those two:
filterTypes[FT_gradientMap20].thumbTexOffsetS = 0.63720703125;
filterTypes[FT_gradientMap20].thumbTexOffsetT = 0.1416015625;
filterTypes[FT_gradientMap21].thumbTexOffsetS = 0.7080078125;
filterTypes[FT_gradientMap21].thumbTexOffsetT = 0.1416015625;
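As a sanity check, those offsets do land on whole source pixels in a 2048-wide atlas (a quick sketch; the helper name is mine):
// 0.63720703125 * 2048 = 1305 and 0.7080078125 * 2048 = 1450 = 1305 + 145,
// so the two thumbs really are adjacent; 0.1416015625 * 2048 = 290.
float texOffsetForPixel(int px) { return px / 2048.0f; }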
==== UPDATE ====
A couple of things I realized off the bat I was doing wrong, which are discussed over here: OpenGL Texture Coordinates in Pixel Space
The width of a single thumbnail is 145. But that would be pixels 0-144, with 145 starting the next one. I was using a width of 145, so that's going to be 1 pixel too big. Using the above center-of-pixel type math, we should actually go from the center of 0 to the center of 144: 144.5 - 0.5 = 144.
Using his formula of (2i + 1)/(2N), I made new offset amounts for each of the starting points and used 144/2048 as the width. That made things better but still off in some areas. And again, still off in one direction sometimes and the other at other times, although consistent for each x or y position.
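To make the formula concrete, here is a minimal sketch of the center-of-pixel mapping (the helper name is mine):
// s(i) = (2i + 1) / (2N) samples the exact center of texel i in an N-texel atlas.
float texelCenter(int i, int N) { return (2.0f * i + 1.0f) / (2.0f * N); }
// e.g. texelCenter(0, 2048) = 1/4096 ≈ 0.000244 and texelCenter(144, 2048) = 289/4096 ≈ 0.070557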
Using a width of 143 gives better results. But I can fix them all just by adjusting the numbers manually until they work. I want the math to make it work out right.
... or maybe it has something to do with min/mag filtering, although I read up on that and what I'm doing seems right for this case.

After a lot of experiments and having to create a grid-lined guide texture so I could see exactly how far off each texture was... I finally got it!
It's pretty simple actually.
uniform mat4 u_modelViewProjectionMatrix;
uniform mediump vec2 u_texOffset;
uniform mediump float u_texScale;
attribute vec3 a_vertexPosition;
attribute mediump vec2 a_vertexTexCoord0;
It was the precision of the texture coordinates. By specifying mediump, it just fixed itself. I suspect this would also help solve the problem I was having in this question:
Why is a texture coordinate of 1.0 getting beyond the edge of the texture?
Once I did that, I had to go back to my original 145 width (which still seems wrong, but oh well). And for what it's worth, I ended up going back to all my original math on all the texture coordinates. The "center of pixel" method was showing more of the neighboring pixels than the straight /2048 did.

Related

Fragment shader not lerping textures correctly

I'm trying to blend two textures (sand and grass [ignore the grass straws]) in my game based on the height of the points. I have somewhat succeeded, but the result is a little bit odd.
In my frag function:
return lerp(tex2D(_SandTex, input.uv), tex2D(_GrassTex, input.uv), InverseLerp(_SandStart, _GrassStart, input.positionWS.y)) * mainLight.shadowAttenuation;
As you can see, it seems like every second triangle is very different. While both 'sets' of triangles blend fine down and up through the y axis, they shouldn't differ this much from their neighbours.
What am I missing here?
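For reference, InverseLerp is just the linear remap of height into a blend weight; a minimal sketch of the usual definition (the exact helper used by the shader may differ):
// Returns 0 at a and 1 at b; values outside [a, b] fall outside [0, 1]
// unless clamped, which is a common source of odd-looking blends.
float inverseLerp(float a, float b, float v) { return (v - a) / (b - a); }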

Pose estimation: determine whether rotation and translation matrices are right

Recently I've been struggling with a pose estimation problem with a single camera. I have some 3D points and the corresponding 2D points on the image. Then I use solvePnP to get the rotation and translation vectors. The problem is: how can I determine whether the resulting vectors are right?
Now I use an indirect way to do this:
I use the rotation matrix, the translation vector and the world 3D coordinates of a certain point to obtain that point's coordinates in the camera system. Then all I have to do is determine whether those coordinates are reasonable. I think I know the directions of the x, y and z axes of the camera system.
Is the camera center the origin of the camera system?
Now consider the x component of that point. Is x equivalent to the distance between the camera and the point in world space along the camera's x-axis direction (the sign can then be determined by which side of the camera the point is placed on)?
The figure below is in world space, while the axes depicted are in the camera system.
======== How the camera and the point are placed in world space ========
|
|
Camera--------------------------> Z axis
| |} Xw?
| P(Xw, Yw, Zw)
|
v x-axis
My rvec and tvec results seem both right and wrong. For a specified point, the z value seems reasonable; I mean, if this point is about one meter away from the camera in the z direction, then the z value is about 1. But for x and y, according to the location of the point, I think x and y should be positive, but they are negative. What's more, the pattern detected in the original image is like this:
But using the point coordinates calculated in the camera system and the camera intrinsic parameters, I get an image like this:
The target keeps its pattern. But it moved from bottom right to top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which seems to be right according to this post.
In the case of camera pose estimation, "the values seem reasonable" can be quantified as the backprojection error: a measure of how well your resulting rotation and translation map the 3D points to the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);
// Use the computed solution to project the 3D pattern to the image
cv::Mat projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);
// Compute the error of each 2D-3D correspondence
std::vector<float> errors;
for (size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints.at(i).x - projectedPattern.at<float>(i, 0);
    float dy = pixelPoints.at(i).y - projectedPattern.at<float>(i, 1);
    // Euclidean distance between projected and measured pixel
    float err = sqrt(dx*dx + dy*dy);
    errors.push_back(err);
}
// Here, compute max or average of your "errors"
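// For example, the mean backprojection error (a sketch; assumes <numeric> is included):
float meanError = std::accumulate(errors.begin(), errors.end(), 0.0f) / errors.size();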
An average backprojection error for a calibrated camera might be in the range of 0-2 pixels. According to your two pictures, this would be far more than that. To me, it looks like a scaling problem. If I am right, you compute the projection yourself. Maybe you can try cv::projectPoints() once and compare.
When it comes to transformations, I learned not to follow my imagination :) The first thing I usually do with the returned rVec and tVec is create a 4x4 rigid transformation matrix out of them (I once posted code for that here). This makes things even less intuitive, but it is compact and handy.
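A minimal sketch of that packing step (my code, not the linked post's; it assumes the CV_64F 3x1 rVec/tVec that solvePnP returns and <opencv2/calib3d.hpp>):
cv::Mat rigidTransform(const cv::Mat& rVec, const cv::Mat& tVec)
{
    cv::Mat R;
    cv::Rodrigues(rVec, R);                              // rotation vector -> 3x3 rotation matrix
    cv::Mat T = cv::Mat::eye(4, 4, R.type());            // start from the identity
    R.copyTo(T(cv::Rect(0, 0, 3, 3)));                   // top-left 3x3 = rotation
    tVec.reshape(1, 3).copyTo(T(cv::Rect(3, 0, 1, 3)));  // last column = translation
    return T;
}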
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the coordinates in the camera system are calculated as (xc, yc, zc). Then xc should be the distance between the camera and the point in the real world in the x direction.
Next, how to determine whether the output matrices are right?
1. As #eidelen points out, backprojection error is one indicative measure.
2. Calculate the coordinates of the points from their coordinates in the world coordinate system and the matrices.
So why did I get a wrong result (the pattern remained but moved to a different region of the image)?
The cameraMatrix parameter of solvePnP() supplies the camera's intrinsic parameters. In the camera matrix, you should use width/2 and height/2 of the image for cx and cy, while I had used the full width and height of the image. I think that caused the error. After I corrected that and re-calibrated the camera, everything seems fine.
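A minimal sketch of such an intrinsic matrix (fx, fy, width and height are assumed to come from your calibration and image size; the helper name is mine):
// Principal point (cx, cy) at the image center, which is what the fix above amounts to.
cv::Mat makeIntrinsics(double fx, double fy, int width, int height)
{
    return (cv::Mat_<double>(3, 3) <<
        fx,  0, width  / 2.0,
         0, fy, height / 2.0,
         0,  0, 1);
}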

Detect if a quad is actually visible in 2D in OpenGL

I currently have 16 tiles, with individual images that make up 1 big map. I pan by transforming right at the beginning before any actual drawing with this:
GL.Translate(G_.Pan(0), G_.Pan(1), 0)
Then I zoom by doing this:
GL.Ortho(-G_.Size * 1.5 ^ G_.ZoomFactor, G_.Size * 1.5 ^ G_.ZoomFactor, G_.Size * 1.5 ^ G_.ZoomFactor, -G_.Size * 1.5 ^ G_.ZoomFactor, -1, 1)
G_.Size is a constant that only varies on startup depending on parameters; the zoom factor ranges from -1 to -13.
What I want to be able to do is check if one of the 16 tiles is within the visible area, so I can stop drawing them when they are not on screen. I had found some quite complex methods for doing it, but they were 3D and seemed like a lot of work for something that should be simple. I would have thought it would have been something like just checking if a point is within the bounds of the visible area, but I have no idea how to get the visible area.
Andon M. Coleman already suggested you implement projection volume culling (a generalized form of frustum culling). This is, however, outside the scope of OpenGL. You must understand that OpenGL is not a "magical" scene graph that does scene management and the like. It's a mere drawing API; what it does is put shaded, textured points, lines or triangles on the screen, and that's it. The rest is up to you, or the libraries you choose to implement it.
In the case of projection volume culling you're testing if a given piece of geometry intersects with the volume defined by the planes that form the borders of the volume. Your projection matrix defines such planes; specifically, it transforms view-space vertex positions into the [-1;1]×[-1;1]×[-1;1] range of perspective-divided clip space. So by inverting the projection matrix and unprojecting the corners of that cube through it, you determine the limiting planes of the projection volume in view space.
You then use that information to intersect your quads with the volume to see if they cross it, i.e. are in any way visible.
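Since the question is purely 2D with an orthographic projection, a much simpler rectangle test is enough. A minimal sketch, assuming the GL.Ortho/GL.Translate setup above (halfExtent stands in for G_.Size * 1.5 ^ G_.ZoomFactor, panX/panY for the G_.Pan values; all names are mine):
struct Rect { float minX, minY, maxX, maxY; };
// Axis-aligned rectangle overlap test.
bool intersects(const Rect& a, const Rect& b)
{
    return a.minX < b.maxX && a.maxX > b.minX &&
           a.minY < b.maxY && a.maxY > b.minY;
}
// The ortho volume spans [-halfExtent, halfExtent] on both axes, and the
// initial translate shifts the world by (panX, panY), so the visible
// world-space rectangle is that span shifted by the negative pan.
Rect visibleArea(float halfExtent, float panX, float panY)
{
    return { -halfExtent - panX, -halfExtent - panY,
              halfExtent - panX,  halfExtent - panY };
}
A tile then needs drawing only when intersects(tileBounds, visibleArea(...)) is true.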

OpenGL texture mapping with different coordinate systems

I already asked a question about texture mapping and these two are related (this question).
I'm working with Quartz Composer, which appears to be kind of specific with textures...
I have a complex polygon that I triangulate in a specific coordinate system (-1 -> 1 on x | -0.75 -> 0.75 on y). I obtain an array of triangle vertices in this coordinate system (triangles 1 to 6 on the left pic).
Then I render each polygon separately (it's necessary for my program), by applying a scale function to its vertices from this coordinate system to the OpenGL one (0. -> 1.). Here it is, even if for the 0 -> 1 range it's kind of stupid:
return (((1. - 0.) * (**myVertexXorY** - minTriangleBound)) / (maxTriangleBound - minTriangleBound)) + 0.;
But I want one image to be textured on these triangles (like on the picture above). So I begin by getting the whole polygon bounds (1 on the right pic), then the triangle bounds (2 on the right pic). I scale 1 to the picture coordinates (3 on the right pic) in pixels, then I get the triangle bounds (2) in pixels.
It gives me the bounds to lock my texture in OpenGL with Quartz:
NSRect myBounds = NSMakeRect(originXinPixels, originYinPixels, widthForTheTriangle, heightForTheTriangle);
And I lock my texture
[myImage lockTextureRepresentationWithColorSpace:space forBounds:myBounds];
Then, with OpenGL:
for (int32 i = 0; i < vertexCount; ++i)
{
    verts[i] = myTriangle.vertices[i];
    texcoord[0] = [self myScaleFunctionFor:XinQuartzCoordinateSystem From:0 To:1];
    texcoord[1] = [self myScaleFunctionFor:YinQuartzCoordinateSystem From:0 To:1];
    glTexCoord2fv(texcoord);
}
And I obtain what you can see: sometimes parts of the image fit, sometimes not (well, in fact with this particular polygon, it doesn't fit at all...).
I'm not really sure I understood your question, but:
What hinders you from directly supplying texture coordinates that match the topology of your source picture? This would be far easier than trying to find some per-triangle linear mapping that moves the picture in the right way.
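For instance, here is a minimal sketch of that idea (my names, not the original code): remap each vertex of the whole polygon from the polygon's bounding box straight to [0,1]², regardless of which triangle it belongs to.
struct Vec2 { float x, y; };
// Linearly remap a vertex from the polygon's bounding box to [0,1]^2.
Vec2 texCoordFor(Vec2 v, Vec2 polyMin, Vec2 polyMax)
{
    return { (v.x - polyMin.x) / (polyMax.x - polyMin.x),
             (v.y - polyMin.y) / (polyMax.y - polyMin.y) };
}
Because the mapping is computed per vertex of the whole polygon, adjacent triangles automatically agree along their shared edges.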

Draw tiled images in CGContext with a scale transformation gives precision errors

I want to draw tiled images and then transform them using the usual panning and zooming gestures. The problem that brings me here is that whenever I have a scaling transformation with a large number of decimal places, a thin line of pixels (1 or 2) appears in the middle of the tiles. I managed to isolate the problem like this:
CGContextSaveGState(UIGraphicsGetCurrentContext());
CGContextSetFillColor(UIGraphicsGetCurrentContext(), CGColorGetComponents([UIColor redColor].CGColor));
CGContextFillRect(UIGraphicsGetCurrentContext(), rect);//rect from drawRect:
float scale = 0.7;
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(50, 50, 100, 100), testImage);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(150, 50, 100, 100), testImage);
CGContextRestoreGState(UIGraphicsGetCurrentContext());
With a 0.7 scale, the two images appear correctly tiled:
With a 0.777777 scale (changing the scale line to float scale = 0.777777;), the visual artifact appears:
Is there any way to avoid this problem? This happens with CGImage, CGLayer and primitive forms such as a rectangle. It also happens on Mac OS X.
Thanks for the help!
edit: Added that this also happens with a primitive form, like CGContextFillRect
edit2: It also happens on Mac OS X!
Quartz has a floating-point coordinate system, so scaling may result in values that are not on pixel boundaries, resulting in visible antialiasing at the edges. If you don't want that, you have two options:
1. Adjust your scale factor so that all your scaled coordinates are integral. This may not always be possible, especially if you're drawing lots of things.
2. Disable anti-aliasing for your graphics context using CGContextSetShouldAntialias(UIGraphicsGetCurrentContext(), false);. This will result in crisp pixel boundaries, but anything but straight lines might not look very good.
When all is said and done, iOS is dealing with discrete pixels on integer boundaries. When your frames are scaled by 0.7, the 50 is reduced to 35, right on a pixel boundary. At 0.777777 it is not, so iOS adapts and moves/shrinks/blends whatever it needs to.
You really have two choices. If you want to scale the context, then round the desired value up or down so that it results in integral scaled frame values (your code shows 50 as the standard multiplication value).
Otherwise, you can leave the context unscaled, scale the content one by one, and use CGRectIntegral to round all dimensions up or down as needed.
EDIT: If my suspicion is right, there is yet another option for you. Let's say you want a scale factor of 0.777777 and a frame of 50,50,100,100. You take the 50, multiply it by the scale, then round the result up or down. Then you recompute the new frame using that rounded value divided by 0.777777, to get some fractional value that, when scaled by 0.777777, returns an integer. Quartz is really good at figuring out that you mean an integral value, so small rounding errors are ignored. I'd bet anything this will work just fine for you.
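A minimal sketch of that rounding trick (the helper name is mine; needs <cmath>):
// Round the scaled coordinate to a whole pixel, then map it back into
// unscaled coordinates so Quartz lands on an integral boundary after scaling.
float snapToScaledPixel(float value, float scale)
{
    return roundf(value * scale) / scale;
}
For example, snapToScaledPixel(50, 0.777777f) is about 50.142864, and 50.142864 * 0.777777 is 39.0, an integral pixel position.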