ImGui text appears as white rectangles in my custom Vulkan renderer

I am following Sascha Willems' example of how to integrate ImGui with a Vulkan renderer.
I am trying to render this simple window:
ImGui::NewFrame();
ImGui::SetNextWindowSize(ImVec2(128, 64), ImGuiCond_FirstUseEver);
ImGui::Begin("Test Window", nullptr);
ImGui::Text("Test Text");
ImGui::End();
ImGui::Render();
This yields the following:
Unfortunately, the text appears as white rectangles.
I examined the rendering process against Sascha Willems' code using RenderDoc but haven't yet found what went wrong.
Here is a link to the RenderDoc capture and the relevant code files.
From my examination, it seems that the vertex data and textures are loaded correctly, but the two samples have slightly different font textures and, as a result, slightly different UV coordinates per vertex. Nothing there explains why the textured text quads appear completely white, though. I guess the difference in the font texture stems from the fact that I use a different ImGui version (the latest docking branch from vcpkg).
Does anyone have an idea what could lead to such a result?
I would be happy to share more info if you think it is relevant (code, RenderDoc files), but I wish to avoid cluttering this post with too much code.

The issue was caused by disabled blending.
Apparently, the ImGui font texture has the value (1.0f, 1.0f, 1.0f, 0.0f) where the text quad should be blank (and not (0.0f, 0.0f, 0.0f, 0.0f)). If blending is not enabled, the alpha channel is ignored, which causes the text quads to appear solid white.
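For reference, here is a minimal sketch of the blend state that fixes it, assuming a standard Vulkan graphics pipeline for the ImGui pass (the factors below are the usual src-alpha/one-minus-src-alpha setup used by the common ImGui Vulkan examples; the surrounding pipeline-creation code is omitted):
VkPipelineColorBlendAttachmentState blendAttachment{};
blendAttachment.blendEnable         = VK_TRUE;   // the missing piece
blendAttachment.srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA;
blendAttachment.dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA;
blendAttachment.colorBlendOp        = VK_BLEND_OP_ADD;
blendAttachment.srcAlphaBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA;
blendAttachment.dstAlphaBlendFactor = VK_BLEND_FACTOR_ZERO;
blendAttachment.alphaBlendOp        = VK_BLEND_OP_ADD;
blendAttachment.colorWriteMask      = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
                                      VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT;

VkPipelineColorBlendStateCreateInfo blendState{};
blendState.sType           = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO;
blendState.attachmentCount = 1;
blendState.pAttachments    = &blendAttachment;
With blending enabled, the white texels with zero alpha blend away instead of overwriting the background.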

Related

How to stop OpenGL background bleed on transparent textures

I have an iOS OpenGL ES 2.0 3D game and am working to get transparent textures working nicely, in this particular example for a fence.
I'll start with the final result. The bits of green background/clear color are coming through around the edges of the fence - note how it isn't ALL edges and some of it is ok:
The reason for the lack of bleed in the top right is the order of operations. As you can see from the following shots, the draw order includes some buildings that get drawn BEFORE the fence, but most of the scene is drawn after it:
So one solution is to always draw my transparent textured objects last. I would like to explore other solutions, as my pipeline might not always allow this. I'm looking for other suggestions to solve this problem without sorting my draws.
This is likely a depth or blend function issue, but I've tried a ton of things and nothing seems to work (different blend functions, different discard alpha levels, different background colors, different texture settings).
Here are some specifics of my implementation.
In my frag shader I'm throwing out fragments that have transparency, so they won't write to the depth buffer:
lowp vec4 texVal = texture2D(sTexture, texCoord);
if(texVal.w < 0.5)
discard;
I'm using one giant PVR texture atlas with mipmapping - the texture itself SHOULD just have 0 or 1 for alpha, but something with the blending could be causing this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
I'm using the following blending when rendering:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Any suggestions to fix this bleed would be great!
EDIT: I tried a different min filter for the texture as suggested in the comments, LINEAR/NEAREST, but got the same result. Note I have also tried NEAREST/NEAREST with no luck:
Try increasing the alpha discard threshold. Mipmap filtering averages edge texels with their transparent neighbours, so sampled alpha can land anywhere between 0 and 1, and a 0.5 cutoff lets those blended edge fragments through:
lowp vec4 texVal = texture2D(sTexture, texCoord);
if(texVal.w < 0.9)
discard;
I know this is an old question, but I came across it several times whilst trying to find an answer to my very similar OpenGL issue. I thought I'd share my findings here for anyone with a similar problem. The culprit in my code looked like this:
glClearColor(1, 0, 1, 0);
glClear(GL_COLOR_BUFFER_BIT);
I used a transparent pink colour for ease of visual reference whilst debugging. Despite being fully transparent, it would bleed in when blending between the background and the colour of the subject, much like the symptoms in the question's screenshot. What fixed it for me was wrapping this code in a colour mask so that the glClear step only touches the alpha channel. It looked like this:
glColorMask(false, false, false, true);
glClearColor(1, 0, 1, 0);
glClear(GL_COLOR_BUFFER_BIT);
glColorMask(true, true, true, true);
To my knowledge, this means that when the clear kicks in it only operates on the alpha channel, so the pink clear colour never ends up in the RGB channels for later alpha blending to pick up. Afterwards all channels are re-enabled so the process continues as intended. If someone with a more solid knowledge of OpenGL can explain it better, I'd love to hear it!

Collision detection of uneven shapes in iOS

I am working on a drag-and-drop activity for iPad. I have a rectangular PNG image (see the image named obj2). It should react only when I drag obj1 onto the black portion of the rectangle.
if (CGRectIntersectsRect(obj1.frame, obj2.frame))
{
    NSLog(@"hit test done!!");
}
Right now, this code registers a hit even on the transparent area. How do I prevent that from happening?
For something as simple as your specific example (triangle and circle), the link that David Rönnqvist gives is very useful. You should definitely look at it to see some available tools. But for the general case, the best bet is clipping, drawing, and searching.
For some background, see Clipping a CGRect to a CGPath.
First, create an alpha-only bitmap image. This is explained in the above link.
Next, clip your context to one of your images using CGContextClipToMask().
Now, draw your other image onto the context.
Finally, search the bitmap data for any colored pixels (see the above link for example code).
If any of the pixels are colored, then there is some overlap.
Another, similar approach (which might actually be faster) is to draw each image into its own alpha-only CGBitmapContext, then walk the pixels in both contexts and check whether any pixel is above 128 in both at once.
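A minimal sketch of that second approach, with hypothetical names (C++ calling the Core Graphics C API; note it ignores the UIKit/Quartz y-axis flip, which real view frames would need to account for):
#include <CoreGraphics/CoreGraphics.h>
#include <cstdint>
#include <vector>

static bool opaquePixelsOverlap(CGImageRef a, CGRect aFrame,
                                CGImageRef b, CGRect bFrame)
{
    CGRect common = CGRectIntersection(aFrame, bFrame);
    if (CGRectIsEmpty(common)) return false;   // bounding boxes don't even touch

    size_t w = (size_t)common.size.width;
    size_t h = (size_t)common.size.height;
    std::vector<uint8_t> alphaA(w * h, 0), alphaB(w * h, 0);

    // kCGImageAlphaOnly: one 8-bit alpha byte per pixel, no color channels.
    CGContextRef ctxA = CGBitmapContextCreate(alphaA.data(), w, h, 8, w, NULL, kCGImageAlphaOnly);
    CGContextRef ctxB = CGBitmapContextCreate(alphaB.data(), w, h, 8, w, NULL, kCGImageAlphaOnly);

    // Draw each image offset so the shared rect's origin maps to (0, 0).
    CGContextDrawImage(ctxA, CGRectMake(aFrame.origin.x - common.origin.x,
                                        aFrame.origin.y - common.origin.y,
                                        aFrame.size.width, aFrame.size.height), a);
    CGContextDrawImage(ctxB, CGRectMake(bFrame.origin.x - common.origin.x,
                                        bFrame.origin.y - common.origin.y,
                                        bFrame.size.width, bFrame.size.height), b);
    CGContextRelease(ctxA);
    CGContextRelease(ctxB);

    // A real hit only where BOTH images are mostly opaque at the same pixel.
    for (size_t i = 0; i < w * h; ++i)
        if (alphaA[i] > 128 && alphaB[i] > 128) return true;
    return false;
}
You would call something like this only after the cheap CGRectIntersectsRect test passes, since rendering and scanning two bitmaps is far more expensive than a rectangle check.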

Why is line width in CoreGraphics on retina display rendered half width?

My process looks like this:
define a rectangle I want to draw in, using point dimensions.
define CGFloat scale = [[UIScreen mainScreen] scale]
Multiply the rectangle's size by the scale
Create an image context of the rectangle size using CGBitmapContextCreate
Draw within the image context
call CGBitmapContextCreateImage
call UIImage imageWithCGImage:scale:orientation: with the appropriate scale.
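A hedged sketch of how those steps might be wired up (the Objective-C UIImage wrapping in the last step is omitted, and the scale is hard-coded here instead of queried from UIScreen; whether the original code scales via the CTM or by multiplying every coordinate by hand is not shown above):
CGFloat scale = 2.0;                        // stand-in for [[UIScreen mainScreen] scale]
CGRect rect = CGRectMake(0, 0, 100, 100);   // point dimensions (step 1)
size_t pxWidth  = (size_t)(rect.size.width  * scale);   // step 3
size_t pxHeight = (size_t)(rect.size.height * scale);

CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, pxWidth, pxHeight, 8, 0, rgb,
                                         kCGImageAlphaPremultipliedLast);   // step 4
CGColorSpaceRelease(rgb);

// Scaling the CTM once lets all later drawing use point coordinates: a
// 2-point line then covers 4 pixels on a 2x screen. The alternative is to
// multiply every coordinate and line width by `scale` by hand.
CGContextScaleCTM(ctx, scale, scale);
// ... drawing in point coordinates (step 5) ...
CGImageRef image = CGBitmapContextCreateImage(ctx);   // step 6
CGContextRelease(ctx);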
I had thought this always resulted in perfect images on both retina and older screens, but I haven't been paying close attention to line contrast/thickness. Generally, the strokes have high contrast against the fill, so I didn't pay attention until now, with low contrast between a line and its fill.
I think perhaps I'm misunderstanding user space, but I thought it was simply a direct conversion through whatever scaling and transforms are applied. In my particular case there is no scaling and no transform except the retina screen's double scaling.
Trying to render a 2-pixel line rather than 1-pixel is easier to explain: when I call
CGContextSetLineWidth(context, 2), the line is rendered 1 pixel thick in the retina simulator. 1 pixel! But this should be two pixels on a retina display.
CGContextSetLineWidth(context, 2 * scale) produces a line that is two pixels wide on a retina screen, but I'm expecting it to be 4 pixels.
CGContextSetLineWidth(context, 1) produces a 1-pixel-wide line that is partly transparent. I understand that the stroke straddles the path, so I prefer talking in terms of 2-pixel-wide strokes and paths on pixel boundaries.
I need to understand why the rendered line width is being divided in half.
My fault. I solve 99% of my own bugs on my own just after I post publicly about them.
The drawing code includes CGContextClip after constructing and copying a path. After that, a fill may be applied, gradient or otherwise, and then the line is drawn, so everything is nice and tidy. I was focusing on the math and the specific drawing code and did not notice the clipping call. Because the stroke straddles the path, clipping to that same path cuts away the outer half of the stroke, effectively halving its width. Normally I catch logic bugs like this immediately, but because it was posted to SO, it's appropriate that the answer is here too.
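A small repro of the bug and its fix, with a hypothetical ctx and path (plain Quartz calls):
// Bug: clipping to the same path you later stroke cuts away the outer half
// of the stroke, so a 2-point line renders only 1 point wide.
CGContextSaveGState(ctx);
CGContextAddPath(ctx, path);
CGContextClip(ctx);              // everything outside the path is now masked off
// ... fill or gradient, confined to the path ...
CGContextRestoreGState(ctx);     // drop the clip BEFORE stroking

CGContextAddPath(ctx, path);     // clipping consumed the current path, so re-add it
CGContextSetLineWidth(ctx, 2.0);
CGContextStrokePath(ctx);        // the full 2-point stroke now straddles the path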

Strange thin line or dots at the bottom of my OpenGL texture

I have made an app similar to this one: http://www.youtube.com/watch?v=U2uH-jrsSxs (the sound is a bit loud and bad). The problem is that a very thin line/dots/whatever appears at the bottom of every texture. It is almost unnoticeable, but it is there and I have no idea why. My texture size is 256x256. I tested earlier with a texture size of 128x128 and I THINK there was nothing there, but I'm not sure. It's not such a big deal as it is very thin, but I find it annoying. Here is a screenshot; I have marked those lines in RED. I'm a noob at OpenGL (ES), so I probably did something wrong. Any help is appreciated.
This will be due to OpenGL tiling the texture to fill the specified area. So the thin line you are seeing will be the very top of that texture just starting to repeat again.
To avoid it, tell the texture to clamp rather than repeat (repeat being synonymous with tiling). Textures repeat by default, so you will want lines something like this (on OpenGL ES use GL_CLAMP_TO_EDGE, since plain GL_CLAMP is not available there):
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
If you're so inclined, there is also a no-code-involved bodge to work around it. Simply edit your source graphics so that no pixels are present on the top or left edges, i.e. move the whole lot down one pixel and right one pixel inside its canvas. But then of course you will need to adjust your coordinates if you want the images to appear in exactly the same place.

How To Draw More Precise Lines using Core Graphics and CALayer

Hello, I am having a hard time making this UI element look the way I want (see screenshot). Notice how the line width and darkness in the image on the right look inconsistent compared to the image on the left (which happens to be a screen grab from Safari), where the border width is more consistent. How does Apple make their lines so perfect?
I'm using a CALayer and the Core Graphics API to draw the image on the right. Is it possible to draw such perfect lines with the standard APIs?
The problem with drawing a 1-pixel path is that Quartz draws paths on an exact point grid, starting from {0,0}. This means that if you stroke a vertical path starting at {10,10} with a 1-point width, half of that line will render in the pixel to the left of the coordinate and half in the pixel to the right, causing a blurring effect.
You should therefore shift your drawing by {0.5,0.5} if you want lines to draw on exact pixels.
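For example, a minimal sketch with a hypothetical ctx: a 1-point vertical hairline drawn at x = 10.5 instead of x = 10 lands squarely on one pixel column instead of blurring across two:
CGContextSetLineWidth(ctx, 1.0);
CGContextMoveToPoint(ctx, 10.5, 0.0);       // the 0.5 offset centres the stroke on the pixel
CGContextAddLineToPoint(ctx, 10.5, 100.0);
CGContextStrokePath(ctx);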
You can definitely draw what you want with Quartz.
Apple uses images for the tab elements.