Why is drawing a circle procedurally slower than reading from a texture? - opengl-es-2.0

I am making an app where I need to draw a lot of circles on the screen, so I had the idea of replacing the texture on the triangles I use for drawing with a function that draws the circle procedurally. However, after testing, it turned out to be slower than picking the values from a texture, although the quality is vastly superior. Is it a problem with the way I produce the circle, or is reading from a texture really that much faster? (It's about twice as fast.)
new code:
precision mediump float;
uniform sampler2D u_Texture;
varying vec4 vColor;
varying vec2 vTexCoordinate;
void main() {
    gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
    mediump float thing = vTexCoordinate.x * vTexCoordinate.x + vTexCoordinate.y * vTexCoordinate.y;
    if (thing < 1.0 && thing > 0.9) {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    }
    if (thing < 0.9) {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0) * vColor;
    }
}
old code:
gl_FragColor = texture2D(u_Texture, vTexCoordinate) * vColor;
Note: I didn't bother to rename vTexCoordinate, so it now holds values in [-1, 1] where it used to be in [0, 1].

Conditional branches are really expensive on the GPU: fragments are processed in lockstep groups, there is no branch prediction, and both sides of a branch may end up being evaluated. Texture lookup latency, on the other hand, can often be hidden behind other shader processing, so the texture version may well come out faster in the end. It's best to avoid branches and loops in GLSL if you can.
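For example, the circle shader from the question can be written branch-free with step and mix. This is just a sketch of the same logic; whether it actually beats the texture lookup still depends on the hardware:
precision mediump float;
varying vec4 vColor;
varying vec2 vTexCoordinate;
void main() {
    // squared distance from the circle's center
    float d = dot(vTexCoordinate, vTexCoordinate);
    // 1.0 inside the outer edge / the inner disc, 0.0 otherwise
    float insideOuter = step(d, 1.0);
    float insideInner = step(d, 0.9);
    // ring band is black, inner disc takes vColor, outside stays transparent
    gl_FragColor = mix(vec4(0.0, 0.0, 0.0, 1.0), vColor, insideInner) * insideOuter;
}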

Related

finding bounding box of centroid with limited information

I have detected blob keypoints in OpenCV C++. The centroid displays fine. How do I then draw a bounding box around the detected blob if I only have the blob center coordinates? I can't work backwards from the center because of too many unknowns (or so I believe).
threshold(imageUndistorted, binary_image, 30, 255, THRESH_BINARY);
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
// Detect blob
detector->detect(binary_image, binary_keypoints);
drawKeypoints(binary_image, binary_keypoints, bin_image_keypoints, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
//draw BBox ?
What am I overlooking to draw the bounding box around the single blob?
I said:
I can't work backwards from center because of too many unknowns (or so I believe).
There is actually no lack of information if the blob size is used: keypoint.size returns the diameter of the blob in question. There might be some inaccurate results with highly asymmetric or lopsided targets, but this worked well for me because I used spheroid objects. Moments are probably the better approach for asymmetrical targets.
keypoint.size should not be confused with keypoints.size(): the latter counts the objects in the vector, while the former is the diameter of the blob. I use both.
Using the diameter I can then calculate the rest with no problem:
float r = keypoint.size / 2; // half the blob diameter reported by SimpleBlobDetector
float TLx = ctr_x - r;
float TLy = ctr_y - r;
float BRx = ctr_x + r;
float BRy = ctr_y + r;
Point TLp(TLx - 10, TLy - 10); // works fine without, but the box is more visible with the margin
Point BRp(BRx + 10, BRy + 10); // same here
std::cout << "Top Left: " << TLp << std::endl << "Bottom Right: " << BRp << std::endl;
cv::rectangle(bin_with_keypoints, TLp, BRp, cv::Scalar(0, 255, 0));
imshow("With Green Bounding Box:", bin_with_keypoints);
TLp = top left point, with 10px adjustments to make the box bigger.
BRp = bottom right point.
TLx and TLy are calculated from the blob center coordinates, as are BRx and BRy. If you are going to use multiple targets I would suggest the contours approach (with moments). I only have 1-2 blobs to keep track of, which is a lot easier and keeps resource usage down.
The rectangle drawing function can also work with a cv::Rect (diameter = keypoint.size):
cv::Rect rect(TLp, BRp); // construct the Rect from the two corner points
// or equivalently from the origin plus width and height:
// cv::Rect rect(TLx, TLy, diameter, diameter);
cv::rectangle(bin_with_keypoints, rect, cv::Scalar(0, 255, 0));

How can I read data with glReadPixels?

I'm working with OpenGL ES on Android, and I've run into a problem. I defined a float array, which is to be passed to the fragment shader.
float[] data = new float[texWidth * texHeight];
// test data
for (int i = 0; i < data.length; i++) {
    data[i] = 0.123f;
}
1. initTexture:
glGenTextures...
glBindTexture...
glTexParameteri...
FloatBuffer fb = BufferUtils.array2FloatBuffer(data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, fb);
2. FBO:
glGenBuffers...
glBindFramebuffer...
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texId, 0);
3. onDrawFrame:
glUseProgram(mProgram);...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);...
IntBuffer fb = BufferUtils.iBufferAllocateDirect(texWidth*texHeight);
glReadPixels(0, 0, texWidth, texHeight, GL_RGBA, GL_UNSIGNED_BYTE, fb);
System.out.println(Integer.toHexString(fb.get(0)));
System.out.println(Integer.toHexString(fb.get(1)));
System.out.println(Integer.toHexString(fb.get(2)));
fragment shader:
precision mediump float;
uniform sampler2D sTexture;
varying vec2 vTexCoord;
void main()
{
    vec4 tex = texture2D(sTexture, vTexCoord.st);
    vec4 color = tex;
    gl_FragColor = color;
}
So, how can I get the float data (0.123f, which I defined before) with glReadPixels? What I get now is ff000000 (ABGR), so I suspect the shader doesn't get the data this way. Can someone tell me why, and how I can deal with it? I am a newbie at this and would really appreciate any help.
Your main problem happens before glReadPixels(). The primary issue is with the way you use glTexImage2D():
FloatBuffer fb = BufferUtils.array2FloatBuffer(data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
GL_RGBA, GL_UNSIGNED_BYTE, fb);
The GL_UNSIGNED_BYTE value for the 8th argument specifies that the data passed in consists of unsigned bytes. However, the values in your buffer are floats. So your float values are interpreted as bytes, which can't possibly end well because they are completely different formats, with different sizes and memory layouts.
Now, you might be tempted to do this instead:
FloatBuffer fb = BufferUtils.array2FloatBuffer(data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
GL_RGBA, GL_FLOAT, fb);
This would work in desktop OpenGL, which supports implicit format conversions as part of specifying texture data. But it is not supported in OpenGL ES. In ES 2.0, GL_FLOAT is not even a legal value for the type argument. In ES 3.0 it is legal, but only for internal formats that actually store floats, like GL_RGBA16F or GL_RGBA32F; it is an error to use it in combination with the GL_RGBA internal format (the 3rd argument).
So unless you use float textures in ES 3.0 (which consume much more memory), you need to convert your original data to bytes. If you have float values between 0.0 and 1.0, you can do that by multiplying them by 255 and rounding to the nearest integer.
Then you can read them back as bytes with glReadPixels(), and should get the same values again, up to 8-bit quantization: 0.123 is stored as round(0.123 * 255) = 31, which reads back as 31/255 ≈ 0.1216.

Images and mask in OpenGL ES 2.0

I'm learning OpenGL ES 2.0 and I'd like to create an App to better understand how it works.
The App has a set of filters that the user can apply to images (I know, nothing new :P).
One of these filters takes two images and a mask, and mixes the two images by showing them through the mask (here is an image to better explain what I want to obtain).
At the moment I'm really confused and I don't know where to start to create this effect.
I can't understand whether I have to work with multiple textures and multiple framebuffers, or whether I can just work with a single shader.
Do you have any hints to help me with this project?
EDIT:
I've found this solution, but when I use lines instead of circles as the mask, the result is really "grungy", especially if the lines are rotated.
precision highp float;
varying vec4 FragColor;
varying highp vec2 TexCoordOut;
uniform sampler2D textureA;
uniform sampler2D textureB;
uniform sampler2D mask;
void main(void) {
    vec4 mask_color = texture2D(mask, TexCoordOut);
    if (mask_color.a > 0.0) {
        gl_FragColor = texture2D(textureA, TexCoordOut);
    } else {
        gl_FragColor = texture2D(textureB, TexCoordOut);
    }
}
Would it be better to use the stencil buffer or blending instead?
You can apply the mask in one line without using the costly if:
gl_FragColor = step( 0.5, vMask.r ) * vColor_1 + ( 1.0 - step( 0.5, vMask.r ) ) * vColor_2;
Or better, just interpolate between the two colors:
gl_FragColor = mix( vColor_2, vColor_1, vMask.r );
In this case the mask can be smoothed (e.g. with a Gaussian blur) to produce less aliasing. This yields very good results compared to single-value thresholding.
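Putting it together, a complete fragment shader built around mix could look like this (a sketch; the uniform and varying names match the answer below):
precision mediump float;
uniform sampler2D uTextureUnit_1;
uniform sampler2D uTextureUnit_2;
uniform sampler2D uTextureMask;
varying vec2 vTextureCoordinates;
void main()
{
    vec4 vColor_1 = texture2D(uTextureUnit_1, vTextureCoordinates);
    vec4 vColor_2 = texture2D(uTextureUnit_2, vTextureCoordinates);
    vec4 vMask = texture2D(uTextureMask, vTextureCoordinates);
    // a mask red value of 1.0 selects image 1, 0.0 selects image 2,
    // and blurred in-between values blend the two smoothly
    gl_FragColor = mix(vColor_2, vColor_1, vMask.r);
}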
There is no need for multiple shaders or framebuffers, just multiple texture units. Simply use 3 texture units, all indexed by the same texture coordinates, and use the mask texture to select between the other two textures. The fragment shader would look like this:
uniform sampler2D uTextureUnit_1;
uniform sampler2D uTextureUnit_2;
uniform sampler2D uTextureMask;
varying vec2 vTextureCoordinates;
void main()
{
    vec4 vColor_1 = texture2D(uTextureUnit_1, vTextureCoordinates);
    vec4 vColor_2 = texture2D(uTextureUnit_2, vTextureCoordinates);
    vec4 vMask = texture2D(uTextureMask, vTextureCoordinates);
    if (vMask.r > 0.5)
        gl_FragColor = vColor_1;
    else
        gl_FragColor = vColor_2;
}
You can see that using a third texture unit just to do a binary test on the red channel is not very efficient, so it would be better to encode the mask into the alpha channel of Texture 1 or 2, but this should get you started.
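That variant could look like the following sketch, assuming the mask has been pre-baked into the alpha channel of the first texture:
precision mediump float;
uniform sampler2D uTextureUnit_1; // RGB = image 1, A = mask
uniform sampler2D uTextureUnit_2;
varying vec2 vTextureCoordinates;
void main()
{
    vec4 vColor_1 = texture2D(uTextureUnit_1, vTextureCoordinates);
    vec4 vColor_2 = texture2D(uTextureUnit_2, vTextureCoordinates);
    // the mask rides along in vColor_1.a, so only two texture units are needed
    gl_FragColor = vec4(mix(vColor_2.rgb, vColor_1.rgb, vColor_1.a), 1.0);
}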

Is there a way to set alpha channel color for sampler2D texture?

I am working on an app that uses OpenGL ES to blend two textures. For the overlay image, I want to specify a certain color to act as the alpha channel, e.g. green. How could this be accomplished? I tried using glBlendFunc without much success. Any help would be greatly appreciated!
There is no such feature in OpenGL itself, but you could achieve it in a shader:
uniform vec3 colorMask; // e.g. green
uniform sampler2D overlay;
varying vec2 texCoord;
void main()
{
    // sample the overlay texture
    vec4 texColor = texture2D(overlay, texCoord);
    // calculate alpha using the vector relational functions
    vec3 absdiff = abs(texColor.rgb - colorMask);
    float alpha = all(lessThan(absdiff, vec3(0.1))) ? 1.0 : 0.0;
    texColor.a = alpha; // use the alpha
    // write the final color; blending can now be applied
    gl_FragColor = texColor;
}
This works by calculating the absolute difference between the texture color and the masking color, and comparing it to 0.1. It is pretty simple, but it might be slow (I'm writing it from memory, so you'll have to test it).
Or you can use a different way of calculating alpha:
uniform vec3 colorMask; // e.g. green
uniform sampler2D overlay;
varying vec2 texCoord;
void main()
{
    // sample the overlay texture
    vec4 texColor = texture2D(overlay, texCoord);
    // calculate alpha from the squared difference and a step function
    vec3 diff = texColor.rgb - colorMask;
    float alpha = step(dot(diff, diff), 0.1);
    texColor.a = alpha; // use the alpha
    // write the final color; blending can now be applied
    gl_FragColor = texColor;
}
This uses the squared-error metric to calculate alpha: it measures the distance between the colors in RGB space, takes its square, and compares that to 0.1. The threshold (0.1) might be a little harder to tweak, but this approach allows a soft threshold: if you have a color gradient and want some colors to be more transparent than others, you can throw away the step function and use smoothstep instead.
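A sketch of that last idea (the 0.05 to 0.15 band is an illustrative guess you would tune to your data):
uniform vec3 colorMask; // e.g. green
uniform sampler2D overlay;
varying vec2 texCoord;
void main()
{
    vec4 texColor = texture2D(overlay, texCoord);
    vec3 diff = texColor.rgb - colorMask;
    // soft threshold: fully opaque near the key color, fading out
    // as the squared distance grows from 0.05 to 0.15
    texColor.a = 1.0 - smoothstep(0.05, 0.15, dot(diff, diff));
    gl_FragColor = texColor;
}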

Can't correctly rotate cylinder in openGL to desired position

I've got a little Objective-C utility program that renders a convex hull. (This is to troubleshoot a bug in another program that calculates the convex hull in preparation for spatial statistical analysis.) I'm trying to render a set of triangles, each with an outward-pointing vector. I can get the triangles without problems, but the vectors are driving me crazy.
I'd like the vectors to be simple cylinders. The problem is that I can't just declare coordinates for where the tops and bottoms of the cylinders belong in 3D (e.g., like I can for the triangles). I have to make them and then rotate and translate them from their default position along the z-axis. I've read a ton about Euler angles, angle-axis rotations, and quaternions, most of which is relevant but not directed at what I need: most people have a set of objects and then need to rotate an object in response to some input, whereas I need to place the object correctly in the 3D "scene".
I'm using the Cocoa3DTutorial classes to help me out, and they work great as far as I can tell, but the rotation bit is killing me.
Here is my current effort. It gives me cylinders that are located correctly but all point along the z-axis (as in this image; we are looking in the -z direction). The triangle poking out behind is not part of the hull; it's there for testing/debugging. The orthogonal cylinders are coordinate axes, more or less, and the spheres are there to make sure the axes are located correctly, since I have to use rotation to place those cylinders as well. (And by the way, when I use that algorithm, the out-vectors fail too, although in a different way: they come out normal to the planes, but all pointing in +z instead of some in -z.)
from Render3DDocument.m:
// Make the out-pointing vector
C3DTCylinder *outVectTube;
C3DTEntity *outVectEntity;
Point3DFloat *sideCtr = [thisSide centerOfMass];
outVectTube = [C3DTCylinder cylinderWithBase:tubeRadius top:tubeRadius height:tubeRadius*10 slices:16 stacks:16];
outVectEntity = [C3DTEntity entityWithStyle:triColor geometry:outVectTube];
Point3DFloat *outVect = [[thisSide inVect] opposite];
Point3DFloat *unitZ = [Point3DFloat pointWithX:0 Y:0 Z:1.0f];
Point3DFloat *rotAxis = [outVect crossWith:unitZ];
double rotAngle = [outVect angleWith:unitZ];
[outVectEntity setRotationX:rotAxis.x
                          Y:rotAxis.y
                          Z:rotAxis.z
                          W:rotAngle];
[outVectEntity setTranslationX:sideCtr.x - ctrX
                             Y:sideCtr.y - ctrY
                             Z:sideCtr.z - ctrZ];
[aScene addChild:outVectEntity];
(Note that Point3DFloat is basically a vector class, and that a Side (like thisSide) is a set of four Point3DFloats, one for each vertex, and one for a vector that points towards the center of the hull).
from C3DTEntity.m:
if (_hasTransform) {
    glPushMatrix();
    // Translation
    if ((_translation.x != 0.0) || (_translation.y != 0.0) || (_translation.z != 0.0)) {
        glTranslatef(_translation.x, _translation.y, _translation.z);
    }
    // Scaling
    if ((_scaling.x != 1.0) || (_scaling.y != 1.0) || (_scaling.z != 1.0)) {
        glScalef(_scaling.x, _scaling.y, _scaling.z);
    }
    // Rotation (about _rotationCenter)
    glTranslatef(-_rotationCenter.x, -_rotationCenter.y, -_rotationCenter.z);
    if (_rotation.w != 0.0) {
        // single rotation of w degrees about the axis (x, y, z)
        glRotatef(_rotation.w, _rotation.x, _rotation.y, _rotation.z);
    } else {
        // otherwise, three separate axis-aligned rotations
        if (_rotation.x != 0.0)
            glRotatef(_rotation.x, 1.0f, 0.0f, 0.0f);
        if (_rotation.y != 0.0)
            glRotatef(_rotation.y, 0.0f, 1.0f, 0.0f);
        if (_rotation.z != 0.0)
            glRotatef(_rotation.z, 0.0f, 0.0f, 1.0f);
    }
    glTranslatef(_rotationCenter.x, _rotationCenter.y, _rotationCenter.z);
}
I added the bit in the above code that uses a single rotation around an axis (the "if (_rotation.w != 0.0)" bit), rather than a set of three rotations. My code is likely the problem, but I can't see how.
If your outvects don't all point in the correct direction, you might have to check your triangles' winding: are they all oriented the same way?
Additionally, it might be helpful to draw a line for each outvect (use the average of the three vertices of your triangle as the origin, and draw a line of a few units' length, depending on your scene's scale, in the direction of the outvect). This way you can be sure that all your vectors are oriented correctly.
How do you calculate your outvects?
The problem appears to be that glRotatef() expects degrees while I was giving it radians. In addition, clockwise rotation is taken to be positive, so the sign of the rotation was wrong. This is the corrected code:
double rotAngle = -[outVect angleWith:unitZ]; // radians
[outVectEntity setRotationX:rotAxis.x
                          Y:rotAxis.y
                          Z:rotAxis.z
                          W:rotAngle * 180.0 / M_PI]; // glRotatef() wants degrees
I can now see that my other program has the inVects wrong (the outVects are poking through the hull instead of pointing out from each face), and I can now track down that bug in the other program... tomorrow.