When I run this code on an integrated Intel GPU on a MacBook Pro, I have no problems. But when I run it on an iMac with an AMD GPU, this simple "Hello World" gives me artifacts along the right edge:
The shader is very simple:
kernel void helloworld(texture2d<float, access::write> outTexture [[texture(0)]],
                       uint2 gid [[thread_position_in_grid]])
{
    outTexture.write(float4((float)gid.x / 640,
                            (float)gid.y / 360, 0, 1),
                     gid);
}
I've tried viewing the texture's contents in two different ways, and both show the problem:
Converting the texture to a CIImage and viewing it in an NSImageView, or calling getBytes, copying the pixel data directly, and manually building a PNG out of it (skipping CIImage entirely). Either way produces this weird artifact, so it is indeed in the texture itself.
Any ideas what causes this kind of problem?
UPDATE:
Fascinating, the issue appears to be related to threadsPerThreadgroup but I'm not sure why it would be.
The above image was created with 24 threads per group. If I change this to 16, the artifacts move to the bottom edge instead.
What I don't understand about this: the gid position should be fixed regardless of which threadgroup is actually running, shouldn't it? Because that is the individual thread's position in the whole image.
With dispatchThreadgroups(), the compute kernel can be invoked for grid positions outside of your width*height grid. You have to explicitly return and do nothing for those positions, with something like:
if (gid.x >= 640 || gid.y >= 360)
    return;
Otherwise, you will attempt to write outside the bounds of the texture (with colors whose components can be larger than 1). That has undefined results.
With dispatchThreads(), Metal takes care of this for you and won't invoke outside of your specified grid size.
The difference in behavior between 24 and 16 threads per group is whether that divides evenly into 640 and 360; whichever dimension doesn't divide evenly is the one that gets over-invoked. With 24, 360/24 = 15 exactly but 640/24 ≈ 26.7, so the x dimension is over-invoked (artifacts on the right edge); with 16, 640/16 = 40 exactly but 360/16 = 22.5, so the y dimension is (artifacts on the bottom edge).
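To make the arithmetic concrete, here is a minimal sketch of the ceiling division involved (plain C++ purely for illustration; the host code in the question is Swift, and the sizes are the ones from the question):

#include <cstdio>

int main() {
    const int width = 640, height = 360;
    const int groupSize = 24; // threads per threadgroup in each dimension

    // Ceiling division: how many groups are needed to cover the grid.
    int groupsX = (width  + groupSize - 1) / groupSize; // 27
    int groupsY = (height + groupSize - 1) / groupSize; // 15

    // Threads actually launched per dimension vs. pixels to cover.
    printf("x: %d threads for %d pixels\n", groupsX * groupSize, width);  // 648 > 640: 8 extra columns
    printf("y: %d threads for %d pixels\n", groupsY * groupSize, height); // 360 == 360: exact
    return 0;
}

With groupSize = 16 the situation flips: x covers 640 exactly, while y launches 368 threads for 360 rows, which is why the artifacts move to the bottom edge.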
On macOS 10.13 or later, it is possible to let the OS figure some of this out. I was using:
commandEncoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
Doing this, I had to calculate the threadgroupsPerGrid myself, which proved to be the source of the problem.
By replacing this call with:
commandEncoder.dispatchThreads(MTLSize(width: Int(width), height: Int(height), depth: 1), threadsPerThreadgroup: threadsPerThreadgroup)
The issues went away.
Related
I made a mesh from a Digital Elevation Map that spans a 1x1 degree box of geography, but when I scale the mesh up to 11139 m in Blender I get these visible jagged shadows on the peaks of the mesh. I'd prefer not to scale everything down, but I suppose I can; it just seems like a strange issue I want to better understand.
My goal is to use the landscape in a WebVR application, but when I put this mesh into an A-Frame scene it also has this issue. Thanks for any tips!
Quick answer:
I think this may be caused by the clipping start/end values, also called near/far clipping planes. Adjusting them may fix the issue, but it may also limit the rendering distance.
Longer explanation:
Imagine a simple grayscale gradient scaled across your entire scene depth (the Z depth buffer). The range of this buffer is set by the camera's start/end clipping (near/far) setting.
By default, Blender has its start/end (near/far) clipping set to 0.01 - 1000, while A-Frame has it at 0.005 - 10000. You may find more information here: A-Frame camera #properties
That means the renderer has to fit every single point in that range somewhere on the grayscale. That may cause overlapping or Z-fighting, because it simply lacks the precision to distinguish the details. It is mainly visible at edges/peaks, because the polygons are connected there at acute angles and the program has to round off the Z-values. That causes overlapping, visible as darker shadows (most likely the backside of the polygon behind).
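To put a formula behind that: with the standard perspective depth mapping used by OpenGL/WebGL (which A-Frame renders through), a point at eye-space distance $d$ between the near plane $n$ and the far plane $f$ ends up with window-space depth

$z_w = \frac{f}{f-n}\left(1 - \frac{n}{d}\right)$

The mapping is hyperbolic in $d$: roughly half of all representable depth values land between $d = n$ and $d = 2n$, so a near plane of 0.005 combined with a far plane of 10000 leaves very little precision for distant geometry such as the mesh peaks.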
You may also want to read more about Z-fighting because it is somewhat related.
Example
I am having strange artifacts on a tiled map while scrolling with the camera clamped on the player (who is a Box2D body).
Before getting this issue I used the linear filter for the tiled map, which prevents those strange artifacts from happening but results in texture bleeding (I loaded the tiled map straight from a .tmx file without padding the tiles).
However, now I am using the nearest filter instead, which gets rid of the bleeding, but when scrolling the map (by walking the character with the cam clamped on him) it seems like a lot of pixels are flickering around. The flickering can get better or worse depending on the camera's zoom value.
But when I use the "OrthoCamController" class from the libgdx utilities, which allows scrolling the map by panning with the mouse/finger, I don't get these artifacts at all.
I assume that the flickering might be caused by bad camera-position values derived from the Box2D body's position.
One more thing I should add here: the game instance runs in 1280*720 display mode while my game cam renders only 800*480. When I change the game cam's render resolution to 1280*720 I don't get those artifacts, but then the tiles are way too tiny.
Has anyone experienced this issue or knows how to fix that? :)
I had a similar problem with this, and found it was due to the camera position carrying an overly fine fractional value.
I think what may be happening is some sort of rounding with certain tile columns/rows in the tilemap renderer.
I fixed this by rounding to a set accuracy, like so:
camera.position.x = Math.round(player.entity.getX() * scalePosition) / scalePosition;
Experiment with various values, but I got it working by using the tile size as the scalePosition value.
About tilesets, I posted a solution here: Getting gaps between tiled textures with libgdx
I've been using that method with Tiled itself. You will have to adjust "margin" and "spacing" when importing tilesets in Tiled to get the effect working.
It's 100% working for me :)
I want to use the Android orientation sensor data for my GLES camera - giving it the rotation matrix. I found a very good example here:
How to use onSensorChanged sensor data in combination with OpenGL
but it only works with GL 1.0, and I need it to work with GLES 2.0. Using my own shaders everything works; moving the camera manually is fine. But the moment I use the rotation matrix like in the example, it doesn't really work.
I generate the rotation matrix with:
SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, bufferedMagnetData);
My application runs in landscape, so I use this method afterwards (like in the example code):
float[] result = new float[16];
SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, result);
return result;
It worked fine on my phone in his code but not in mine. My screen looks like that:
The rotation matrix seems to be rotated 90° to the right (almost as if I had forgotten to switch my activity to landscape).
I thought I might be using the remap() method in a wrong way, but in the example it makes sense, and the camera movement works now: if I rotate to the left, the screen rotates to the left as well, although, since everything is turned, it effectively rotates "up" (compared to the ground, which is not at the bottom but on the right). It just looks like I made a wall instead of a ground, but I'm sure my vertex coordinates are right.
I took a look at the draw method for the GLSurface and I don't see what I might have done wrong here:
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
MatrixStack.glLoadMatrix(sensorManager.getRotationMatrix()); // Loads the model-view matrix with the rotation matrix mentioned above
GameRenderer.setPerspMatrix(); // Sets the perspective matrix uniform for GLES. This shouldn't be the problem.
MatrixStack.mvPushMatrix();
drawGround();
MatrixStack.mvPopMatrix();
As I said, when moving my camera manually everything works perfect. So what is wrong with the rotation matrix I get?
Well, okay, it was a very old problem but now that I took a look at the code again I found the solution.
Having the phone in landscape, I had to remap the axes using
SensorManager.remapCoordinateSystem(R, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, R);
But that still didn't rotate the image - even though the mapping of the Y and -X axes worked fine. So simply using
Matrix.rotateM(R, 0, 90, 1, 0, 0);
Does the job. Not really nicely but it works.
I know it was a very old question and I don't see why I made this mistake, but perhaps someone else will have the same problem one day.
Hope this helps,
Tobias
If it is (was) working on a specific phone but not on yours, I guess the Android version may play a role here. We faced this issue in the mixare Augmented Reality Engine, where a SurfaceView is superimposed over a Camera view. Please consider that the information here may not apply to your case, since we are not using OpenGL.
Modern versions of Android return a default orientation, whereas previously portrait was the default. You can check how we query this in the Compatibility class. This information is then used to apply different values to the remapCoordinateSystem call; check lines 739 and onwards of this file.
Mixare also uses landscape mode by default, so I guess our values for the remapping should apply to your case just as well. As I said earlier, we are using the 3x3 matrices, since we are not using OpenGL, but I guess this should be the same for OpenGL-compatible matrices.
Take time and play with the orientation matrix; you will find a column that contains useful values.
Log the values of each column to see which one is useful, and try quaternions too. Keep playing with the values, but never try the code directly in the renderer - check the values first.
Later you will have more input options, like touch, and there too you will have to test the values, play with them, and use sensitivity constants with the matrices.
I'm working on a fork of Pleasant3D.
When rotating an object being displayed, the object always rotates around the same point relative to itself, even if that point is not at the center of the view (e.g. because the user has panned to move the object in the view).
I would like to change this so that the view always rotates the object around the point at the center of the view as it appears to the user instead of the center of the object.
Here is the core of the current code that rotates the object around its center (slightly simplified) (from here):
glLoadIdentity();
// midPlatform is the offset to reach the "middle" of the object (or more specifically the platform on which the object sits) in the x/y dimension.
// This the point around which the view is currently rotated.
Vector3 *midPlatform = [self.currentMachine calcMidBuildPlatform];
glTranslatef((GLfloat)cameraTranslateX - midPlatform.x,
(GLfloat)cameraTranslateY - midPlatform.y,
(GLfloat)cameraOffset);
// trackBallRotation and worldRotation come from trackball.h/c which appears to be
// from an Apple OpenGL sample.
if (trackBallRotation[0] != 0.0f) {
glRotatef (trackBallRotation[0], trackBallRotation[1], trackBallRotation[2], trackBallRotation[3]);
}
// accumulated world rotation via trackball
glRotatef (worldRotation[0], worldRotation[1], worldRotation[2], worldRotation[3]);
glTranslatef(midPlatform.x, midPlatform.y, 0.);
// Now draw object...
What transformations do I need to apply in what order to get the effect I desire?
Some of what I've tried so far
As I understand it this is what the current code does:
"OpenGL performs matrices multiplications in reverse order if multiple transforms are applied to a vertex" (from here). This means that the first transformation to be applied is actually the last one in the code above. It moves the center of the view (0,0) to the center of the object.
This point is then used as the center of rotation for the next two transformations (the rotations).
Finally the midPlatform translation is done in reverse to move the center back to the original location, and the XY translations (panning) done by the user are applied. Here also the "camera" is moved away from the object to the proper location (indicated by cameraOffset).
This seems straightforward enough. So what I need to change is: instead of translating the center of the view to the center of the object (midPlatform), I need to translate it to the current center of the view as seen by the user, right?
Unfortunately this is where the transformations start affecting each other in interesting ways and I am running into trouble.
I tried changing the code to this:
glLoadIdentity();
glTranslatef(0,
0,
(GLfloat)cameraOffset);
if (trackBallRotation[0] != 0.0f) {
glRotatef (trackBallRotation[0], trackBallRotation[1], trackBallRotation[2], trackBallRotation[3]);
}
// accumulated world rotation via trackball
glRotatef (worldRotation[0], worldRotation[1], worldRotation[2], worldRotation[3]);
glTranslatef(cameraTranslateX, cameraTranslateY, 0.);
In other words, I translate the center of the view to the previous center, rotate around that, and then apply the camera offset to move the camera away to the proper position. This makes the rotation behave exactly the way I want it to, but it introduces a new issue: now any panning done by the user is relative to the object. For example, if the object is rotated so that the camera is looking along the X axis end-on, then when the user pans left to right the object appears to move closer to or further from the user instead of left or right.
I think I can understand why this is (the XY camera translations are applied before the rotation), and I think what I need to do is cancel out the pre-rotation translation after the rotation (to avoid the weird panning effect) and then apply another translation that is relative to the viewer (eye coordinate space) instead of the object (object coordinate space), but I'm not sure exactly how to do this.
I found what I think are some clues in the OpenGL FAQ (http://www.opengl.org/resources/faq/technical/transformations.htm), for example:
9.070 How do I transform my objects around a fixed coordinate system rather than the object's local coordinate system?
If you rotate an object around its Y-axis, you'll find that the X- and Z-axes rotate with the object. A subsequent rotation around one of these axes rotates around the newly transformed axis and not the original axis. It's often desirable to perform transformations in a fixed coordinate system rather than the object’s local coordinate system.
The root cause of the problem is that OpenGL matrix operations postmultiply onto the matrix stack, thus causing transformations to occur in object space. To affect screen space transformations, you need to premultiply. OpenGL doesn't provide a mode switch for the order of matrix multiplication, so you need to premultiply by hand. An application might implement this by retrieving the current matrix after each frame. The application multiplies new transformations for the next frame on top of an identity matrix and multiplies the accumulated current transformations (from the last frame) onto those transformations using glMultMatrix().
You need to be aware that retrieving the ModelView matrix once per frame might have a detrimental impact on your application’s performance. However, you need to benchmark this operation, because the performance will vary from one implementation to the next.
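In code, that retrieve-and-premultiply approach looks roughly like this (a sketch in fixed-function OpenGL; "accumulated" and "frameAngle" are hypothetical names, not from Pleasant3D):

// Sketch of FAQ 9.070's premultiply-by-hand idea.
// "accumulated" is kept across frames and starts as the identity matrix.
static GLfloat accumulated[16] = {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1
};

void applyFrameRotation(GLfloat frameAngle) {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // this frame's *new* screen-space transform goes in first...
    glRotatef(frameAngle, 0.0f, 1.0f, 0.0f);
    // ...then the transforms accumulated from previous frames are
    // multiplied on top, so the new transform premultiplies them
    glMultMatrixf(accumulated);
    // read the combined result back for the next frame
    glGetFloatv(GL_MODELVIEW_MATRIX, accumulated);
}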
And
9.120 How do I find the coordinates of a vertex transformed only by the ModelView matrix?
It's often useful to obtain the eye coordinate space value of a vertex (i.e., the object space vertex transformed by the ModelView matrix). You can obtain this by retrieving the current ModelView matrix and performing simple vector / matrix multiplication.
But I'm not sure how to apply these in my situation.
You need to translate the "center of view" point to the origin, rotate, then invert that translation, going back to the object's transform. This is known as a change of basis in linear algebra.
This is way easier to work with if you have a proper 3D math library (I'm assuming you do), and it also helps to stay away from the deprecated fixed-pipeline APIs (more on that later).
Here's how I'd do it:
Find the transform for the center-of-view point in world coordinates (figure it out, then draw it to make sure it's correct, with X/Y/Z axes too, since the axes are supposed to be correct w.r.t. the view). If you use the center-of-view point and the rotation (usually the inverse of the camera's rotation), this is a transform from the world origin to the view center. Store it in a 4x4 matrix.
Apply the inverse of the above transform, so that the view center becomes the origin: glMultMatrixf(center_of_view_tf.inverse());
Rotate about this point however you want (glRotatef()).
Transform everything back to world space: glMultMatrixf(center_of_view_tf);
Apply the object's own world transform (glTranslatef/glRotatef or glMultMatrixf) and draw it, as in the sketch below.
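Putting the steps together, a minimal sketch in fixed-function OpenGL (cx/cy/cz for the center-of-view point, angle/ax/ay/az for the rotation, and drawObject() are all illustrative names; in the simple case the basis change reduces to a translation):

glLoadIdentity();
glTranslatef(0.0f, 0.0f, cameraOffset);   // move the camera back

// OpenGL postmultiplies, so these calls apply to vertices bottom-up:
glTranslatef(cx, cy, cz);                 // step 4: transform back to world space
glRotatef(angle, ax, ay, az);             // step 3: rotate about the view center
glTranslatef(-cx, -cy, -cz);              // step 2: bring the view center to the origin

// step 5: the object's own world transform, then draw
glTranslatef(objX, objY, objZ);
drawObject();                             // hypothetical draw call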
About the fixed function pipeline
Back in the old days, there were separate transistors for transforming a vertex (or its texture coordinates), computing where light was in relation to it, applying lights (up to 8), and texturing fragments in many different ways. Simply put, glEnable() enabled fixed blocks of silicon to do some computation in the hardware graphics pipeline. As performance grew, die sizes shrank, and people demanded more features, the amount of dedicated silicon grew too, and much of it went unused.
Eventually, it got so advanced that you could program it in rather obscene ways (register combiners, anyone?). Then it became feasible to actually upload a small assembler program for all vertex-level transforms. At that point it made no sense to keep a lot of silicon around that just did one thing (especially as you could have used those transistors to make the programmable stuff faster), so everything became programmable. If "fixed function" rendering was called for, the driver just converted the state (X lights, texture projections, etc.) to shader code and uploaded that as a vertex shader.
So, currently, where even the fragment processing is programmable, there are just a lot of fixed-function options that are used by tons and tons of OpenGL applications, but the silicon on the GPU just runs shaders (and lots of them, in parallel).
...
To make OpenGL more efficient, the drivers less bulky, and the hardware simpler and usable on mobile/console devices, and to take full advantage of the programmable hardware OpenGL runs on these days, many functions in the API are now marked deprecated. They are not available in OpenGL ES 2.0 and beyond (mobile), and you won't get the best performance out of them even on desktop systems (where they will still be in the driver for ages to come, serving equally ancient code bases originating back to the dawn of accelerated 3D graphics).
The fixed-functionness mostly concerns how transforms/lighting/texturing etc. are done by "default" in OpenGL (i.e. glEnable(GL_LIGHTING)), instead of you specifying these ops in your custom shaders.
In the new, programmable OpenGL, transform matrices are just uniforms in the shader. Any rotate/translate/multiply/inverse (like the above) should be done by client code (your code) before being uploaded to OpenGL. (Using only glLoadMatrix is one way to start thinking about it, but instead of using gl_ModelViewProjectionMatrix and its ilk in your shader, use your own uniforms.)
It's a bit of a bother, since you have to implement quite a bit of what the GL driver used to do for you, but if you have your own object list/graph with transforms, it's not that much work. (OTOH, if you have a lot of glTranslate/glRotate calls scattered through your code, it might be...) As I said, a good 3D math library is indispensable here.
...
So, to change the above code to "programmable pipeline" style, you'd just do all these matrix multiplications in your own code (instead of the GL driver doing it, still on the CPU) and then send the resulting matrix to opengl as a uniform before you activate the shaders and draw your object from VBOs.
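As a minimal sketch of that last step (the program/vbo handles, the computeModelViewProjection() helper, and the uniform name "u_mvp" are all illustrative, not from any particular code base):

// Upload a CPU-computed matrix as a shader uniform, then draw from a VBO.
GLfloat mvp[16];
computeModelViewProjection(mvp); // hypothetical helper from your math library

glUseProgram(program);
glUniformMatrix4fv(glGetUniformLocation(program, "u_mvp"), 1, GL_FALSE, mvp);

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);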
(Note that modern cards do not have fixed-function code, just a lot of code in the driver to compile fixed-function rendering state to a shader that does the job. No wonder "classic" GL drivers are huge...)
...
Some info about this process is available at Tom's Hardware Guide and probably Google too.
I've had this problem on a couple of machines now - almost always laptops, and I think usually those with Intel graphics chipsets - when using ID3DXLine.
I have some code that vaguely looks like this:
MyLine->SetWidth(MyLineThickness);
MyLine->SetPattern(MyLinePattern);
MyLine->Begin();
{
    // ... draw some lines with MyLine->Draw ...
}
MyLine->End();
With MyLine being a CComPtr<ID3DXLine>. When MyLineThickness is 1.0, these machines draw thick lines (looking as if they're drawn with a felt-tip pen!). When I change MyLineThickness to 1.1 or 1.5, I then get nice thin lines. Obviously, increasing that to around 8.f will give me thick lines again.
So ID3DXLine on these machines seems to do something really odd when thickness is 1.0. At < 1.f and > 1.f it seems to behave as you would expect!
Has anyone else experienced any strangeness in ID3DXLine? I'm using D3D 9.0c btw, alongside the Feb 2010 SDK.
According to the DX9 documentation, lines of thickness 1.0f are drawn using native hardware line-drawing support if it exists. All other sizes are drawn by producing a pair of triangles and are, hence, rendered via the vertex shader. Try checking D3DCAPS9::LineCaps for the supported capabilities.
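That check would look something like this (standard D3D9 API; the device pointer name is illustrative):

// Query the line-drawing capabilities of the Direct3D 9 device.
D3DCAPS9 caps;
if (SUCCEEDED(device->GetDeviceCaps(&caps))) {
    // The D3DLINECAPS_* flags describe what native line drawing supports
    // (blending, texturing, z-test, antialiasing, ...).
    if (caps.LineCaps & D3DLINECAPS_ANTIALIAS) {
        // native antialiased lines are available
    }
}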
After some playing with ID3DXLine I decided to use DrawPrimitive for drawing lines - it's probably a bit slower, but at least you get the same result on any system.