Images and masks in OpenGL ES 2.0

I'm learning OpenGL ES 2.0 and I'd like to create an app to better understand how it works.
The app has a set of filters that the user can apply to images (I know, nothing new :P).
One of these filters takes two images and a mask, and mixes the two images by showing each one through the mask (here is an image to better explain what I want to obtain).
At the moment I'm really confused and I don't know where to start to create this effect.
I can't understand whether I have to work with multiple textures and multiple framebuffers, or whether I can just work with a single shader.
Do you have any hints to help me with this project?
EDIT:
I've found this solution, but when I use lines instead of circles as the mask, the result is really "grungy" (heavily aliased), especially if the lines are rotated.
precision highp float;

varying highp vec2 TexCoordOut;

uniform sampler2D textureA;
uniform sampler2D textureB;
uniform sampler2D mask;

void main(void)
{
    // the mask's alpha channel selects which image shows through
    vec4 mask_color = texture2D(mask, TexCoordOut);
    if (mask_color.a > 0.0) {
        gl_FragColor = texture2D(textureA, TexCoordOut);
    } else {
        gl_FragColor = texture2D(textureB, TexCoordOut);
    }
}
Would it be better to use the stencil buffer or blending instead?

You can apply the mask in one line without using the costly if:
gl_FragColor = step( 0.5, vMask.r ) * vColor_1 + ( 1.0 - step( 0.5, vMask.r ) ) * vColor_2;
Or, better, just interpolate between the two colors:
gl_FragColor = mix( vColor_1, vColor_2, vMask.r );
In this case the mask can be smoothed (e.g. with a Gaussian blur) to produce less aliasing. This will yield very good results compared to single-value thresholding.
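Put together, a minimal version of the shader using mix() could look like this (a sketch reusing the sampler names from the question, assuming the mask stores its value in the red channel):

precision mediump float;

varying vec2 TexCoordOut;

uniform sampler2D textureA;
uniform sampler2D textureB;
uniform sampler2D mask;

void main(void)
{
    vec4 colorA = texture2D(textureA, TexCoordOut);
    vec4 colorB = texture2D(textureB, TexCoordOut);

    // red channel of the (pre-blurred) mask drives the blend
    float m = texture2D(mask, TexCoordOut).r;

    // m = 0.0 gives colorB, m = 1.0 gives colorA, values in between blend
    gl_FragColor = mix(colorB, colorA, m);
}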

There is no need for multiple shaders or framebuffers, just multiple texture units. Simply use 3 texture units, all indexed by the same texture coordinates, and use the mask texture to select between the other two textures. The fragment shader would look like this:
uniform sampler2D uTextureUnit_1;
uniform sampler2D uTextureUnit_2;
uniform sampler2D uTextureMask;

varying vec2 vTextureCoordinates;

void main()
{
    vec4 vColor_1 = texture2D(uTextureUnit_1, vTextureCoordinates);
    vec4 vColor_2 = texture2D(uTextureUnit_2, vTextureCoordinates);
    vec4 vMask = texture2D(uTextureMask, vTextureCoordinates);

    if (vMask.r > 0.5)
        gl_FragColor = vColor_1;
    else
        gl_FragColor = vColor_2;
}
You can see that using a third texture unit just to do a binary test on the red channel is not very efficient, so it would be better to encode the mask into the alpha channel of texture 1 or 2, but this should get you started.
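For example, if the mask were baked into the alpha channel of texture 1 ahead of time, the third sampler could be dropped entirely (a sketch under that assumption):

precision mediump float;

varying vec2 vTextureCoordinates;

uniform sampler2D uTextureUnit_1; // assumed to carry the mask in its alpha channel
uniform sampler2D uTextureUnit_2;

void main()
{
    vec4 vColor_1 = texture2D(uTextureUnit_1, vTextureCoordinates);
    vec4 vColor_2 = texture2D(uTextureUnit_2, vTextureCoordinates);

    // the mask now rides along in vColor_1.a, so only two texture units are needed;
    // step(0.5, a) is 1.0 where a >= 0.5, selecting vColor_1 there
    gl_FragColor = mix(vColor_2, vColor_1, step(0.5, vColor_1.a));
}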

Related

Can you change the bounds of a Sampler in a Metal Shader?

In the fragment function of a Metal shader file, is there a way to redefine the "bounds" of the texture with respect to what the sampler will consider its normalized coordinates to be?
By default, a sample coordinate of (0, 0) maps to the top-left "pixel" of the texture and (1, 1) to the bottom-right "pixel". However, I'm re-using textures for drawing, and at any given render pass only a portion of the texture contains the relevant data.
For example, in a texture of width 500 and height 500, I might have only copied data into the region (0, 0, 250, 250). In my fragment function, I'd like the sampler to interpret a normalized coordinate of 1.0 as 250, not 500. Is that possible?
I realize I can just change the sampler to use pixel addressing, but that comes with a few restrictions, as noted in the Metal Shading Language Specification.
No, but if you know the region you want to sample from, it's quite easy to do a little math in the shader to fix up your sampling coordinates. This is often used with texture atlases.
Suppose you have an image that's 500x500 and you want to sample the bottom-right 125x125 region (just to make things more interesting). You could pass this sampling region in as a float4, storing the bounds as (left, top, width, height) in the xyzw components. In this case, the bounds would be (375, 375, 125, 125). Your incoming texture coordinates are "normalized" with respect to this square. The shader simply scales and biases these coordinates into texel coordinates, then normalizes them to the dimensions of the whole texture:
fragment float4 fragment_main(FragmentParams in [[stage_in]],
                              texture2d<float, access::sample> tex2d [[texture(0)]],
                              sampler sampler2d [[sampler(0)]],
                              // ...
                              constant float4 &spriteBounds [[buffer(0)]])
{
    // original coordinates, normalized with respect to the subimage
    float2 texCoords = in.texCoords;

    // texture dimensions, in pixels
    float2 texSize = float2(tex2d.get_width(), tex2d.get_height());

    // adjusted texture coordinates, normalized with respect to the full texture
    texCoords = (texCoords * spriteBounds.zw + spriteBounds.xy) / texSize;

    // sample the color at the modified coordinates
    float4 color = tex2d.sample(sampler2d, texCoords);

    // ...
}

Why is drawing a circle procedurally slower than reading from a texture?

I am making an app where I need to draw a lot of circles on the screen, so I had the idea of replacing the texture on the triangles I am using with a function that draws a circle procedurally. However, after testing, it turned out to be slower than picking the values from a texture (although the quality is vastly superior). Is it a problem with the way I produce the circle, or is reading from a texture really a lot faster? (It is about twice as fast.)
new code:
precision mediump float;

uniform sampler2D u_Texture;

varying vec4 vColor;
varying vec2 vTexCoordinate;

void main() {
    gl_FragColor = vec4(0, 0, 0, 0);
    // squared distance from the center of the quad
    mediump float thing = vTexCoordinate.x * vTexCoordinate.x + vTexCoordinate.y * vTexCoordinate.y;
    if (thing < 1.0 && thing > 0.9) {
        gl_FragColor = vec4(0, 0, 0, 1);          // black outline ring
    }
    if (thing < 0.9) {
        gl_FragColor = vec4(1, 1, 1, 1) * vColor; // filled interior
    }
}
old code:
gl_FragColor = texture2D(u_Texture, vTexCoordinate) * vColor;
Note: I didn't bother to rename vTexCoordinate, so it now holds a value in [-1, 1] where it used to be in [0, 1].
Conditional branches are really expensive on the GPU, since there's no branch prediction, and probably for other reasons too. Also, texture lookup latency can often be hidden under shader processing overhead, so the texture version may actually end up faster. It's best to avoid branches and loops in GLSL if you can.
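For illustration, here is one possible branchless rewrite of the circle shader above, using step() instead of the ifs (a sketch; behavior at exactly 0.9 and 1.0 differs trivially from the original):

precision mediump float;

varying vec4 vColor;
varying vec2 vTexCoordinate; // in [-1, 1], as in the question

void main() {
    // squared distance from the center
    float r2 = dot(vTexCoordinate, vTexCoordinate);

    float inside  = step(r2, 0.9); // 1.0 when r2 <= 0.9 (the filled disc)
    float covered = step(r2, 1.0); // 1.0 when r2 <= 1.0 (disc plus outline)

    // outside: transparent; ring: opaque black; disc: the fill color
    gl_FragColor = mix(vec4(0.0, 0.0, 0.0, covered), vColor, inside);
}

Replacing step() with smoothstep() over a narrow range would additionally antialias the circle's edges.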

How to render 3d texture data with 2d textures in OpenGL ES 2.0?

OpenGL ES 2.0 has support for 3D textures only via extensions, and those extensions are not supported on many devices. So what I'm trying to do is use 2D textures instead of 3D textures. First, I've packed the 3D texture data into an atlas of 2D textures. For example, instead of having a 128x128x4 3D texture, I will have a 2D texture atlas that contains four 128x128 2D textures. The fragment shader will look something like this:
precision mediump float;

uniform sampler2D s_texture;
uniform vec2 TextureSize2D;
uniform vec3 TextureSize3D;

varying vec3 texCoords;

vec2 To2DCoords(vec3 coords)
{
    float u = coords.x + TextureSize3D.x * (coords.z - TextureSize2D.x * floor(coords.z / TextureSize2D.x));
    float v = coords.y + TextureSize3D.y * floor(coords.x / TextureSize2D.x);
    return vec2(u, v);
}

void main()
{
    gl_FragColor = texture2D(s_texture, To2DCoords(texCoords));
}
The method To2DCoords is inspired by an algorithm found at https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch29.html.
The problem is that at render everything is messed up, compared with the 3D texture. So what am I doing wrong?
According to your code, the input and output of To2DCoords() should be in pixel coordinates (0-255 for a 256x256 texture, for example), not in normalized texture coordinates (0-1.0).
Your code should look like:
gl_FragColor = texture2D(s_texture, To2DCoords(texCoords * TextureSize3D) / TextureSize2D);
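To make the pixel-coordinate idea concrete, here is a rough sketch of a slice lookup for the 128x128x4 example, assuming the four slices are packed left-to-right in a single 512x128 atlas; the uniform names uAtlasSize and uVolumeSize are made up for this illustration:

precision mediump float;

uniform sampler2D s_texture;
uniform vec2 uAtlasSize;  // atlas size in pixels, e.g. (512, 128)
uniform vec3 uVolumeSize; // volume size in pixels, e.g. (128, 128, 4)

varying vec3 texCoords;   // normalized volume coordinates in [0, 1]

// map a normalized 3D coordinate to a normalized 2D atlas coordinate
vec2 volumeToAtlas(vec3 coords)
{
    float slice = floor(coords.z * uVolumeSize.z); // which tile
    slice = min(slice, uVolumeSize.z - 1.0);       // clamp for coords.z == 1.0
    vec2 pixel = coords.xy * uVolumeSize.xy;       // pixel inside the tile
    pixel.x += slice * uVolumeSize.x;              // offset to the tile
    return pixel / uAtlasSize;                     // back to [0, 1]
}

void main()
{
    gl_FragColor = texture2D(s_texture, volumeToAtlas(texCoords));
}

A full solution would typically also interpolate between the two nearest slices, which this sketch leaves out.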

Is there a way to set alpha channel color for sampler2D texture?

I am working on an app that uses OpenGL ES to blend two textures. For the overlay image, I want to specify a certain color to act as the alpha channel (e.g. green). How could this be accomplished? I tried using glBlendFunc without much success. Any help would be greatly appreciated!
There is no such feature in OpenGL itself, but you could achieve it in a shader:
uniform vec3 colorMask; // e.g. green
uniform sampler2D overlay;

varying vec2 texCoord;

void main()
{
    // get RGB of the texture
    vec4 texture = texture2D(overlay, texCoord);

    // calculate alpha using vector relational functions
    vec3 absdiff = abs(texture.rgb - colorMask);
    float alpha = all(lessThan(absdiff, vec3(0.1))) ? 1.0 : 0.0;

    texture.a = alpha; // use alpha

    // write final color, can use blending
    gl_FragColor = texture;
}
This works by calculating the absolute difference between the texture color and the masking color, and comparing it to 0.1. It is pretty simple, but it might be slow (I'm writing it from memory; you'll have to test it).
Or you can use a different way of calculating alpha:
uniform vec3 colorMask; // e.g. green
uniform sampler2D overlay;

varying vec2 texCoord;

void main()
{
    // get RGB of the texture
    vec4 texture = texture2D(overlay, texCoord);

    // calculate alpha using the squared difference and a step function
    vec3 diff = texture.rgb - colorMask;
    float alpha = step(dot(diff, diff), 0.1);

    texture.a = alpha; // use alpha

    // write final color, can use blending
    gl_FragColor = texture;
}
This uses the squared-error metric to calculate alpha: it computes the distance between the colors in RGB space, squares it, and compares that to 0.1. The threshold (0.1) might be a little harder to tweak, but this approach lets you use a soft threshold. If you have a color gradient and want some colors to be more transparent than others, you can throw away the step function and use smoothstep instead.
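For instance, a smoothstep variant with arbitrarily chosen inner and outer thresholds (0.05 and 0.15) keeps the same opaque-near-the-mask-color semantics but fades the alpha gradually:

uniform vec3 colorMask; // e.g. green
uniform sampler2D overlay;

varying vec2 texCoord;

void main()
{
    vec4 texture = texture2D(overlay, texCoord);

    // squared distance from the masked color
    vec3 diff = texture.rgb - colorMask;

    // alpha fades from 1.0 at the masked color down to 0.0 as the
    // squared distance grows from 0.05 to 0.15
    texture.a = 1.0 - smoothstep(0.05, 0.15, dot(diff, diff));

    gl_FragColor = texture;
}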

Fragment-shader blur ... how does this work?

uniform sampler2D sampler0;
uniform vec2 tc_offset[9];

void blur()
{
    vec4 sample[9];

    for (int i = 0; i < 9; ++i)
        sample[i] = texture2D(sampler0, gl_TexCoord[0].st + tc_offset[i]);

    gl_FragColor = (sample[0] + (2.0 * sample[1]) + sample[2] +
                    (2.0 * sample[3]) + sample[4] + (2.0 * sample[5]) +
                    sample[6] + (2.0 * sample[7]) + sample[8]) / 13.0;
}
How does the sample[i] = texture2D(sampler0, ...) line work?
It seems like to blur an image, I have to first generate the image, yet here I'm somehow trying to query the very image I'm generating. How does this work?
It applies a blur kernel to the image. tc_offset needs to be properly initialized by the application to form a 3x3 area of sampling points around the actual texture coordinate:
0 0 0
0 x 0
0 0 0
(assuming x is the original coordinate). The offset for the upper-left sampling point would be (-1/width, -1/height). The offset for the center point needs to be carefully aligned to the texel center (the off-by-0.5 problem). Also, the hardware bilinear filter can be used to cheaply increase the amount of blur (by sampling between texels).
The rest of the shader weights the samples according to their distance from the center. Usually, the weights are precomputed as well:
for (int i = 0; i < NUM_SAMPLES; ++i) {
    result += texture2D(sampler, texcoord + offsetscaling[i].xy) * offsetscaling[i].z;
}
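The offsets themselves can also be derived in the shader from a single texel-size uniform instead of being uploaded as an array. A rough sketch, simplified to an unweighted box blur; uTexelSize is a hypothetical uniform the application would set to (1.0/width, 1.0/height):

uniform sampler2D sampler0;
uniform vec2 uTexelSize; // (1.0 / width, 1.0 / height), set by the application

void blur()
{
    vec4 sum = vec4(0.0);

    for (int y = -1; y <= 1; ++y) {
        for (int x = -1; x <= 1; ++x) {
            // offset of this tap from the center texel
            vec2 offset = vec2(float(x), float(y)) * uTexelSize;
            sum += texture2D(sampler0, gl_TexCoord[0].st + offset);
        }
    }

    gl_FragColor = sum / 9.0; // average over the 3x3 neighborhood
}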
One way is to render your original image to a texture instead of to the screen.
Then you draw a full-screen quad using this shader, with that texture as its input, to post-process the image.
As you note, in order to make a blurred image, you first need to make an image and then blur it. This shader does (just) the second step, taking an image that was generated previously and blurring it. Additional code elsewhere is needed to generate the original, non-blurred image.