ID3DXLine strange behavior when line width is 1.0, on certain machines

I've had this problem on a couple of machines now - almost always laptops and I think usually those with Intel graphics chipsets, when using ID3DXLine.
I have some code that vaguely looks like this:
MyLine->SetWidth(MyLineThickness);
MyLine->SetPattern(MyLinePattern);
MyLine->Begin();
{
... draw some lines with MyLine->Draw
}
MyLine->End();
With MyLine being a CComPtr<ID3DXLine>. When MyLineThickness is 1.0, these machines draw thick lines (looking as if they're drawn with a felt-tip pen!). When I change MyLineThickness to 1.1 or 1.5, I get nice thin lines. Obviously, increasing it to around 8.f gives me thick lines again.
So ID3DXLine on these machines seems to do something really odd when thickness is 1.0. At < 1.f and > 1.f it seems to behave as you would expect!
Has anyone else experienced any strangeness in ID3DXLine? I'm using D3D 9.0c btw, alongside the Feb 2010 SDK.

According to the DX9 documentation, lines of thickness 1.0f are drawn using the hardware's native line drawing support, if it exists. All other widths are drawn by expanding the line into a pair of triangles, which are then rendered through the vertex pipeline. Try checking D3DCAPS9::LineCaps for the supported capabilities.
After some playing with ID3DXLine I decided to use DrawPrimitive for drawing lines - it's probably a bit slower, but at least you get the same result on any system.
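To illustrate the triangle-expansion path the documentation describes, here is a rough sketch (plain C++, not D3DX's actual implementation) of how a widened line segment can be turned into a screen-space quad:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Expand a line segment (a -> b) of the given width into the four
// corners of a screen-space quad (two triangles), which is roughly
// what D3DX does internally for widths other than 1.0f.
// quad[0..3] are ordered for a triangle strip.
void ExpandLineToQuad(Vec2 a, Vec2 b, float width, Vec2 quad[4])
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) len = 1.0f;          // guard against a degenerate segment
    // Unit perpendicular, scaled to half the line width.
    float nx = -dy / len * (width * 0.5f);
    float ny =  dx / len * (width * 0.5f);
    quad[0] = { a.x + nx, a.y + ny };
    quad[1] = { a.x - nx, a.y - ny };
    quad[2] = { b.x + nx, b.y + ny };
    quad[3] = { b.x - nx, b.y - ny };
}
```

A real implementation would also handle joins and end caps between consecutive segments; this only shows the basic expansion for a single segment.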

Related

Right edge of Metal Texture has anomalies

When I run this code on an integrated Intel GPU on a MacBook Pro, I have no problems. But when I run it on an iMac with an AMD GPU, this simple "Hello World" gives me artifacts along the right edge:
The shader is very simple:
kernel void helloworld(texture2d<float, access::write> outTexture [[texture(0)]],
                       uint2 gid [[thread_position_in_grid]])
{
    outTexture.write(float4((float)gid.x / 640,
                            (float)gid.y / 360, 0, 1),
                     gid);
}
I've tried viewing the texture's contents in two different ways, and both are producing the problems:
Converting the texture to a CIImage and viewing it in an NSImageView, or calling getBytes and copying the pixel data directly and manually building a PNG out of it (skipping CIImage entirely). Either way produces this weird artifact, so it is indeed in the texture itself.
Any ideas what causes this kind of problem?
UPDATE:
Fascinating, the issue appears to be related to threadsPerThreadgroup but I'm not sure why it would be.
The above image was created with 24 threads per group. If I change this to 16, the artifacts move to the bottom edge instead.
What I don't understand about this is that the gid position should be fixed regardless of which threadgroup is actually running, shouldn't it? Because that is the individual thread's position in the whole image.
With dispatchThreadgroups(), the compute kernel can be invoked for grid positions outside of your width*height grid. You have to explicitly skip those positions with something like:
if (gid.x >= 640 || gid.y >= 360)
return;
Otherwise, you will attempt to write outside of the bounds of the texture (with colors some of whose components are larger than 1). That has undefined results.
With dispatchThreads(), Metal takes care of this for you and won't invoke outside of your specified grid size.
The difference in behavior between 24 and 16 threads per group is whether that divides evenly into 640 and 360. Whichever doesn't divide evenly is the dimension which gets over-invoked.
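The over-invocation can be checked with a bit of arithmetic (a plain C++ sketch, not Metal API code): the threadgroup count per dimension is the grid size divided by the group size, rounded up, and that rounding is what produces the out-of-bounds invocations. With 24 threads per group, 640/24 rounds up to 27 groups (8 extra columns on the right edge) while 360/24 is exactly 15; with 16 threads per group, 640/16 is exactly 40 while 360/16 rounds up to 23 groups (8 extra rows at the bottom).

```cpp
// Threadgroups needed to cover `size` threads at `groupSize` threads
// per group (rounded up), and the resulting number of out-of-bounds
// kernel invocations along that dimension.
int groupsNeeded(int size, int groupSize)
{
    return (size + groupSize - 1) / groupSize;
}

int extraInvocations(int size, int groupSize)
{
    return groupsNeeded(size, groupSize) * groupSize - size;
}
```

Whichever dimension has a nonzero remainder is the edge where the artifacts show up.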
Using macOS 10.13 or later, it is possible to let the OS figure some of these things out. I was using:
commandEncoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
Doing this, I had to calculate the threadgroupsPerGrid myself, which proved to be the source of the problem.
By replacing this call with:
commandEncoder.dispatchThreads(MTLSize(width: Int(width), height: Int(height), depth: 1), threadsPerThreadgroup: threadsPerThreadgroup)
The issues went away.

Horizontal or vertical lines don't get drawn on the surface (Vulkan)

I have this problem of a line not getting drawn, but only if it's perfectly vertical or horizontal.
Say I have points A{ 400, 300 } and B{ 400, 200 }. Now if I try to draw a line between these points, it doesn't find its way onto the screen.
If I however change point A to be positioned at { 401, 300 }, the program runs as I would intend it to.
Is there a clear reason why straight horizontal or vertical lines don't get drawn? And is there a way to circumvent that? I don't want to tilt all the straight lines.
Part of the pipeline-setup:
inputAssembly.topology = vk::PrimitiveTopology::eLineStrip;
rasterInfo.polygonMode = vk::PolygonMode::eLine;
Using Vulkan SDK 1.0.42.1 on an Intel iGPU.
Edit:
Okay, if I raise lineWidth above 1.f on the rasterizer it will draw. However, not every GPU is capable of doing that. But at least it's a temporary workaround.
(I know it's a very old post, but no answer provided. ;-)).
In the case of lines wider than 1.0f: you are allowed to draw such lines only if you enable the wideLines feature, and you can enable it only when it is supported by the physical device you are using. Without enabling this feature, you can't provide values other than 1.0 for the line width during pipeline creation. So even if it works, it can only be a temporary solution, because you are violating the specification.
This kind of behavior suggests a bug in the driver. Try the latest version (if you haven't already), and if that doesn't help, contact Intel's support and submit a bug report (if you haven't already ;-)).
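The rule from the spec can be summarized as a small validity check (a plain sketch with illustrative names, not actual Vulkan API code): without wideLines the width must be exactly 1.0, and with it the width must fall inside the device's reported lineWidthRange limit.

```cpp
// Sketch of the line-width rule the Vulkan spec imposes at pipeline
// creation (function and parameter names are illustrative):
// - without the wideLines feature, lineWidth must be exactly 1.0;
// - with it, lineWidth must lie inside the device's lineWidthRange.
bool lineWidthIsValid(float lineWidth, bool wideLinesEnabled,
                      float rangeMin, float rangeMax)
{
    if (!wideLinesEnabled)
        return lineWidth == 1.0f;
    return lineWidth >= rangeMin && lineWidth <= rangeMax;
}
```

In a real application you would read the feature bit from vkGetPhysicalDeviceFeatures and the range from the device limits, and request the feature at device creation.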

OpenGL ES 2.0 drawing imprecision

I'm having a weird issue in OpenGL. I'm designing a 2D engine; so far I've coded the routines that let you draw sprites, rectangles, and boxes, and translate and scale them. However, when I run a small demo of my engine, I notice that when gradually scaling rectangles in an animation (drawn using 4 vertices and GL_LINE_LOOP), the rectangle edges seem to bounce between the two neighboring pixels.
I can't determine the source of the problem, or even formulate a proper search query for Google. If someone can shed some light on this matter, please do. If my question is unclear, let me know.
Building a 2D library on top of OpenGL ES is going to be problematic for several reasons. First of all, the Khronos specifications state that OpenGL ES is not intended to produce "pixel perfect" rendering; every OpenGL ES renderer is allowed some variation in rendered results. This is because the actual rendering is implemented in hardware, and floating-point rounding can be a little different from platform to platform. Even the shader compilers are completely different from one GPU to the next.
Another issue is that most of the GPUs on mobile devices today are tile-based deferred renderers, and they do not typically support partial screen rendering. In other words, every screen update requires replacing the entire frame.
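One common mitigation for edges flickering between neighboring pixels (not mentioned above, offered here only as a sketch) is to snap the final screen-space coordinates to pixel centers before submitting the vertices:

```cpp
#include <cmath>

// Snap a window-space coordinate to the center (x.5) of the pixel
// that contains it. This keeps 2D line and rectangle edges from
// bouncing between two neighboring pixels when fractional scaling
// or translation is applied.
float snapToPixelCenter(float coord)
{
    return std::floor(coord) + 0.5f;
}
```

The trade-off is that snapped geometry no longer moves sub-pixel-smoothly, so this fits static UI and outlines better than continuously animated content.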

How to stop OpenGL background bleed on transparent textures

I have an iOS OpenGL ES 2.0 3D game and am working to get transparent textures working nicely, in this particular example for a fence.
I'll start with the final result. The bits of green background/clear color are coming through around the edges of the fence - note how it isn't ALL edges and some of it is ok:
The reason for the lack of bleed in the top right is order of operations. As you can see from the following shots, the order of draw includes some buildings that get drawn BEFORE the fence. But most of it is after the fence:
So one solution is to always draw my transparent textured objects last. I would like to explore other solutions, as my pipeline might not always allow this. I'm looking for other suggestions to solve this problem without sorting my draws.
This is likely a depth or blend issue, but I've tried a ton of stuff and nothing seems to work (different blend functions, different discard alpha levels, different background colors, different texture settings).
Here are some specifics of my implementation.
In my fragment shader I'm discarding fragments that have transparency - this way they won't be written to the depth buffer:
lowp vec4 texVal = texture2D(sTexture, texCoord);
if(texVal.w < 0.5)
discard;
I'm using one giant PVR texture atlas with mipmapping - the texture itself SHOULD just have 0 or 1 for alpha, but something with the blending could be causing this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
I'm using the following blending when rendering:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Any suggestions to fix this bleed would be great!
EDIT - tried a different min filter for the texture as suggested in the comments, LINEAR/NEAREST, but same result. Note I have also tried NEAREST/NEAREST and no luck:
Try increasing the alpha test threshold:
lowp vec4 texVal = texture2D(sTexture, texCoord);
if(texVal.w < 0.9)
discard;
I know this is an old question but I came across it several times whilst trying to find an answer to my very similar OpenGL issue. Thought I'd share my findings here for anyone with similar. The culprit in my code looked like this:
glClearColor(1, 0, 1, 0);
glClear(GL_COLOR_BUFFER_BIT);
I used a pink transparent colour for ease of visual reference whilst debugging. Despite the fact it was transparent, when blending between the background and the colour of the subject it would bleed in, much like the symptoms in the screenshot of the question. What fixed it for me was wrapping this code with a colour mask around the glClear step. It looked like this:
glColorMask(false, false, false, true);
glClearColor(1, 0, 1, 0);
glClear(GL_COLOR_BUFFER_BIT);
glColorMask(true, true, true, true);
To my knowledge, this means that when the clear kicks in, it only operates on the alpha channel. After that, all channels are re-enabled to continue the process as intended. If someone with a more solid knowledge of OpenGL can explain it better, I'd love to hear it!

Sensor orientation -> GL rotation doesn't work properly

I want to use the Android orientation sensor data for my GLES camera - giving it the rotation matrix. I found a very good example here:
How to use onSensorChanged sensor data in combination with OpenGL
but this only works with GL 1.0, and I need it to work on GLES 2.0. Using my own shaders, everything works; moving the camera manually is fine. But the moment I use the rotation matrix like in the example, it doesn't really work.
I generate the rotation matrix with:
SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, bufferedMagnetData);
My application runs in landscape, so I use this method afterwards (like in the example code):
float[] result = new float[16];
SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, result);
return result;
It worked fine on my phone in his code but not in mine. My screen looks like that:
The rotation matrix seems to be rotated 90° to the right (almost as if I had forgotten to switch my activity to landscape).
I was thinking I might be using the remap() method in a wrong way, but in the example it makes sense, and the camera movement works now. If I rotate to the left, the screen rotates to the left as well, though, since everything is turned, it rotates "up" (compared to the ground, which is not at the bottom but on the right). It looks as if I built a wall instead of a ground, but I'm sure my vertex coordinates are right.
I took a look at the draw method for the GLSurface and I don't see what I might have done wrong here:
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
MatrixStack.glLoadMatrix(sensorManager.getRotationMatrix()); // Loads the model-view matrix with the above rotation matrix
GameRenderer.setPerspMatrix(); // Sets the perspective matrix uniform for GLES. This shouldn't be the problem.
MatrixStack.mvPushMatrix();
drawGround();
MatrixStack.mvPopMatrix();
As I said, when moving my camera manually everything works perfect. So what is wrong with the rotation matrix I get?
Well, okay, it was a very old problem but now that I took a look at the code again I found the solution.
Having the phone in landscape I had to remap the axis using
SensorManager.remapCoordinateSystem(R, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, R);
But that still didn't rotate the image - even though the mapping of the Y and -X axes worked fine. So simply using
Matrix.rotateM(R, 0, 90, 1, 0, 0);
does the job. Not really nice, but it works.
I know it was a very old question and I don't see why I made this mistake but perhaps anyone else has the same problem one day.
Hope this helps,
Tobias
If it is (or was) working on a specific phone but not on yours, I guess the Android version may play a role here. We faced this issue in the mixare Augmented Reality Engine, where a SurfaceView is superimposed over a Camera view. Please consider that the information here may not apply to your case, since we are not using OpenGL.
Modern versions of Android return a default orientation, whereas previously portrait was the default. You can check how we query this in the Compatibility class. This information is then used to apply different values to the remapCoordinateSystem call; check lines 739 and onwards of this file.
Mixare uses landscape mode by default as well, so I guess our values for the remapping should apply to your case just as well. As I said earlier, we are using 3x3 matrices, since we are not using OpenGL, but I guess this should be the same for OpenGL-compatible matrices.
Take time to play with the orientation matrix; you will find a column that contains useful values.
Log the values for each column and see which one is useful. Try quaternions, and keep playing with the values; never try the code directly in the renderer - check the values first.
Later you will have more options for input, like touch. There, too, you will have to test the values, play with them, and use sensitivity constants with your matrices.