I am playing around with a compute shader and have a weird behaviour I really cannot explain. I was wondering if you could help me figure it out.
This is a suboptimal use of a compute shader, but I want to understand what is happening...
I have a 64x64 texture and I want to write 4 horizontal pixels from a single invocation of my shader. Therefore I call vkCmdDispatch( 64 / 4, 1, 1 ); with the following shader:
#version 450
layout (local_size_x = 1, local_size_y = 1) in;
layout (binding = 0, rgba8) uniform writeonly image2D resultImage;
void main()
{
    const uint texPosX = gl_GlobalInvocationID.x * 4;
    const uint texPosY = gl_GlobalInvocationID.y;
    imageStore( resultImage, ivec2( texPosX + 0, texPosY ), vec4( 1.0f, 0.0f, 0.0f, 1.0f) );
    imageStore( resultImage, ivec2( texPosX + 1, texPosY ), vec4( 0.0f, 1.0f, 0.0f, 1.0f) );
    imageStore( resultImage, ivec2( texPosX + 2, texPosY ), vec4( 0.0f, 0.0f, 1.0f, 1.0f) );
    imageStore( resultImage, ivec2( texPosX + 3, texPosY ), vec4( 1.0f, 1.0f, 1.0f, 1.0f) );
}
I would expect this to write 4 pixels (red, green, blue, and white) in a single row, so the final image should look like vertical lines with a pattern of { red, green, blue, white, red, green, blue, white, ... }.
What I get is instead vertical lines with a pattern of { red, green, blue, white, green, blue, white, red, blue, etc... } so it looks like there is a shift by one in the values.
If I instead change the way I compute texPosX to const uint texPosX = gl_GlobalInvocationID.x * 5; (a stride of 5 pixels per invocation instead of 4) I get the expected result. If I add a fifth write like so
imageStore( resultImage, ivec2( texPosX + 4, texPosY ), vec4( 1.0f, 0.0f, 1.0f, 1.0f) ); // purple color at fifth pixel
the result is exactly the same as previously, meaning the last write is "invisible". My image is really 64x64, but it seems the number of pixels each invocation is working on is 5, with the last one being invisible... And as far as I know, 64 / 4 = 16 workgroups and 64 / 16 = 4 pixels per workgroup (since my local group size is 1).
I am surely missing something obvious, but I have no clue what it is. I thought I understood the global/local workgroup size, but it does not seem to be the case...
Thanks a lot for your help!
I'm studying Vulkan coordinate system stuff by working on a toy renderer.
I'm confused about coordinates of vertex positions.
Online Vulkan info, such as this:
https://matthewwellings.com/blog/the-new-vulkan-coordinate-system/
...mentions that +X is right, +Y is down, and +Z is back.
In my renderer, +Z is pointing forward and I can't figure out why.
I have a triangle defined like this:
// CCW is facing forward
std::vector<PosColorVertex> vertexBuffer = {
{{ 0.0f, -1.0f, 0.0f}, {1.0f, 0.0f, 0.0f}},
{{-1.0f, 1.0f, 0.0f}, {0.0f, 1.0f, 0.0f}},
{{ 1.0f, 1.0f, -5.0f}, {0.0f, 0.0f, 1.0f}},
};
That -5(Z) moves the vertex back, into the screen. It should be +5 that does that.
The coordinate system seems to be like this:
If I place the camera at the origin, it looks like this:
Another shot, with the camera away from the triangle (view translated by -4 on Z).
Some relevant code. Both model & view matrix are identity.
VS:
outColor = inColor;
gl_Position = ubo.projectionMatrix * ubo.viewMatrix * ubo.modelMatrix * vec4(inPos.xyz, 1.0);
FS:
outFragColor = vec4(inColor, 1.0);
Projection is:
glm::perspective(glm::radians(60.0f), w/h, 0.1, 256.0);
Clip, and normalized device coordinates
Clip coordinates are those we get from the vertex shader. Normalized device coordinates (NDC) are the same, but divided by w. There are two common user options (left-handed and right-handed):
What "up" means is actually up to you. But if you want it to be compatible with virtually all presentation engines, you want "up" to mean -y after the viewport transform (so in NDC your "up" should be -y with the vanilla viewport transform, or +y if the viewport transform later flips it).
The choice of "up" always being -y in framebuffer/image coordinates is because surface coordinates on virtually all presentation engines assume an upper-left origin:
Many image file formats also assume the same.
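For reference, here is a minimal sketch (my own addition, not part of the original question or answer) of the vanilla viewport transform for a viewport at offset (0, 0) with a positive height: +y in NDC maps to increasing framebuffer y, which is downward on an upper-left-origin surface.

// Vanilla Vulkan viewport transform, viewport offset (0, 0), positive height:
// +y in NDC moves toward larger framebuffer y, i.e. *down* on an
// upper-left-origin presentation surface.
struct FramebufferCoord { float x, y; };

FramebufferCoord viewportTransform(float xNdc, float yNdc, float width, float height)
{
    return { (xNdc * 0.5f + 0.5f) * width,
             (yNdc * 0.5f + 0.5f) * height };
}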
World coordinates
World coordinates are the ones that are perfectly up to you. And via the vertex shader you transform the world coordinates into the clip coordinates Vulkan can process.
You did this via your glm::perspective.
Let's first actually see what we have:
std::vector<PosColorVertex> vertexBuffer = {
{{ 0.0f, -1.0f, 0.0f}, {1.0f, 0.0f, 0.0f}},
{{-1.0f, 1.0f, 0.0f}, {0.0f, 1.0f, 0.0f}},
{{ 1.0f, 1.0f, -5.0f}, {0.0f, 0.0f, 1.0f}},
};
Now, what this actually means is again highly up to interpretation. We need another "up" reference direction.
To stay sane, we would perhaps like "up" to be in the increasing direction of y. So that means we got some red corner at the bottom. We got one green corner at upper left. And we got one blue corner at upper right. Or so I assume was the author's intent.
Additionally, to stay sane, we would prefer a right-handed coordinate system. So, if we have chosen that +y means "up" and +x means "right", then -z gotta be "front" (and +z gotta be "back"):
(This is something that matches e.g. how Blender has its world coordinates.) Now we got ourselves in a bit of a pickle though. Our z is negative instead of positive as required by NDC. And our y points the other way than "up" does in NDC. Whatever transform we do, we need to make these match.
What does glm::perspective() do
glm::perspective() primarily makes things look, well, perspectivy. But to do that it needs to make a couple of assumptions.
The easy one is depth. In Vulkan NDC it is zero to one. Conveniently there is GLM_FORCE_DEPTH_ZERO_TO_ONE. That instructs perspective() to map your near and far to 0 and 1, respectively. (The default is -1 to 1, which would not work in Vulkan unless manually corrected.) The x and y are still always -1 to 1.
Second choice is handedness. Right-handed is the default. Left-handed needs GLM_FORCE_LEFT_HANDED, or the handedness can be chosen explicitly for a single function, e.g. perspectiveLH() / perspectiveRH(). The naming is slightly misleading: what "right-handed" actually means here is that "front" is -z, and "left-handed" means the projection assumes "front" is in the positive direction of z:
Third choice is what "up" is. glm::perspective() does not actually do anything with this, and the polarity of y stays the same throughout the transform. If we want +y to mean "up", we need to handle it manually. Either we can make use of the viewport flip feature, or we can bake it into the view-projection matrix: proj[1][1] = -proj[1][1].
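As a minimal sketch of that second option, baking the flip into the projection (the helper name is mine):

#define GLM_FORCE_RADIANS
#define GLM_FORCE_DEPTH_ZERO_TO_ONE   // Vulkan-style 0..1 depth, as described above
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Returns a projection with depth in 0..1 and +y meaning "up",
// without relying on a negative-height viewport flip.
glm::mat4 makeVulkanProjection(float fovyDegrees, float aspect, float zNear, float zFar)
{
    glm::mat4 proj = glm::perspective(glm::radians(fovyDegrees), aspect, zNear, zFar);
    proj[1][1] = -proj[1][1];   // flip y for the upper-left-origin surface
    return proj;
}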
How to test this stuff
It is actually pretty straightforward to test this. The following code can be used for the purpose:
#include <cmath>
#include <iostream>
#define GLM_FORCE_RADIANS
#define GLM_FORCE_DEPTH_ZERO_TO_ONE
//#define GLM_FORCE_LEFT_HANDED
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
int main(){
#ifdef GLM_FORCE_LEFT_HANDED
    const float near = 1.0f;
    const float far = 2.0f;
#else //right-handed
    const float near = -1.0f;
    const float far = -2.0f;
#endif

    glm::vec3 right { 1.0f,  0.0f, near};
    glm::vec3 left  {-1.0f,  0.0f, near};
    glm::vec3 up    { 0.0f,  1.0f, near};
    glm::vec3 down  { 0.0f, -1.0f, near};
    glm::vec3 front { 0.0f,  0.0f, (far-near) + near };
    glm::vec3 back  { 0.0f,  0.0f, (near-far) + near };

    const auto xform = glm::perspective(glm::radians(60.0f), 1.0f, std::abs(near), std::abs(far));

    auto r = xform * glm::vec4(right, 1.0f);
    auto l = xform * glm::vec4(left,  1.0f);
    auto u = xform * glm::vec4(up,    1.0f);
    auto d = xform * glm::vec4(down,  1.0f);
    auto f = xform * glm::vec4(front, 1.0f);
    auto b = xform * glm::vec4(back,  1.0f);

    std::cout << "Right to clip: (" << r.x << ", " << r.y << ", " << r.z << ", " << r.w << ")\n";
    std::cout << "Left to clip: (" << l.x << ", " << l.y << ", " << l.z << ", " << l.w << ")\n";
    std::cout << "Up to clip: (" << u.x << ", " << u.y << ", " << u.z << ", " << u.w << ")\n";
    std::cout << "Down to clip: (" << d.x << ", " << d.y << ", " << d.z << ", " << d.w << ")\n";
    std::cout << "Front to clip: (" << f.x << ", " << f.y << ", " << f.z << ", " << f.w << ")\n";
    std::cout << "Back to clip: (" << b.x << ", " << b.y << ", " << b.z << ", " << b.w << ")\n";
}
For both handedness settings, we get:
Right to clip: (1.73205, 0, 0, 1)
Left to clip: (-1.73205, 0, 0, 1)
Up to clip: (0, 1.73205, 0, 1)
Down to clip: (0, -1.73205, 0, 1)
Front to clip: (0, 0, 2, 2)
Back to clip: (0, 0, -2, 0)
Oops, it all gets clipped away. But anyway, "right" and "up" come out as positive numbers. So yeah, we might want to flip y some way to be compatible with presentation engine coordinates. After the perspective divide, "front" is the (0, 0, 1) direction (clip (0, 0, 2, 2) divided by w = 2), and "back" stops existing (w = 0).
Note that the only thing that changes between the two settings in the code is the direction of z as used in the world coordinates.
Is it possible to display non-rectangular items in an app?
The top right edge of each element is clipped:
I turned off clipping on the canvas element and set the clipping region of the context. I even allowed for the stroke drawing outside the path. Here's what I'm using to draw it:
Canvas
{
//id: root
// canvas size
height: parent.height - 8
width: height
anchors.top: parent.top + 4
clip: false
z: index + 1
// handler to override for drawing
onPaint:
{
// get context to draw with
var ctx = getContext("2d")
ctx.reset();
// path that includes 1 pixel margin on all sides
ctx.beginPath()
ctx.moveTo( 8, 0 )
ctx.lineTo( width + 4, 0 )
ctx.lineTo( width - 4, height )
ctx.lineTo( 0, height )
ctx.closePath()
ctx.clip();
// setup the stroke
ctx.lineWidth = 2
ctx.strokeStyle = "white"
ctx.beginPath()
ctx.moveTo( 9, 1 )
ctx.lineTo( 9 + width, 1 )
ctx.lineTo( 1 + width, height - 1 )
ctx.lineTo( 1, height - 1 )
ctx.closePath()
ctx.fillStyle = (roleStatus.toLowerCase().indexOf("success")!==-1) ? "green" : "red"
ctx.fill()
ctx.stroke()
}
}
This will be used on Windows and Android.
Thanks
Yes... You can use QQuickPaintedItem to paint items directly from C++ with native painting tools like QPainterPath.
Check out http://doc.qt.io/qt-5/qtquick-customitems-painteditem-example.html
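Here is a minimal sketch of that approach (the class name and the hard-coded path values are my own, assuming Qt 5 / QtQuick):

// shapeitem.h -- a QQuickPaintedItem that fills a parallelogram with
// QPainterPath instead of drawing it with the Canvas element.
#include <QQuickPaintedItem>
#include <QPainter>
#include <QPainterPath>
#include <QPen>

class ShapeItem : public QQuickPaintedItem
{
    Q_OBJECT
public:
    explicit ShapeItem(QQuickItem *parent = nullptr) : QQuickPaintedItem(parent) {}

    void paint(QPainter *painter) override
    {
        QPainterPath path;
        path.moveTo(8, 0);
        path.lineTo(width() - 4, 0);
        path.lineTo(width() - 12, height());
        path.lineTo(0, height());
        path.closeSubpath();

        painter->setRenderHint(QPainter::Antialiasing);
        painter->setPen(QPen(Qt::white, 2));
        painter->fillPath(path, Qt::green);
        painter->drawPath(path);
    }
};

// Register it once, e.g. in main():
// qmlRegisterType<ShapeItem>("MyItems", 1, 0, "ShapeItem");
// and use it from QML as ShapeItem { ... } after importing MyItems 1.0.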
The reason that your canvas is clipping is that you are drawing to width + 4, which should be (width - 8); since you move to (8, 0) first, you end up drawing an extra 4 pixels too far. Try either moving the item over 4 pixels by doing moveTo(4, 0), or make the line shorter by using just width instead of width + 4.
Also check out anchors.fill: parent, which will most likely work better in your case.
The way I avoid crazy bugs like this is by never hard-coding width, height, x or y into my application. Instead, use percentages such as
(parent.width * 0.25) to get 1/4 of the parent.
Here's ONE way you could fix your code...
Canvas
{
//id: root
// canvas size
height: parent.height * 0.95
width: height
anchors.top: parent.top
clip: false
z: index + 1
// handler to override for drawing
onPaint:
{
// get context to draw with
var ctx = getContext("2d")
ctx.reset();
// path that includes 1 pixel margin on all sides
ctx.beginPath()
ctx.moveTo( width * 0.1, 0 )
ctx.lineTo( width * 0.9, 0 )
ctx.lineTo( width * 0.7, height )
ctx.lineTo( 0, height )
ctx.closePath()
ctx.clip();
/* etc etc */
}
}
I was unable to find a way to draw outside the bounds of the item. I was able to achieve the effect I wanted though. I drew the polygon within the bounds of the item and set the 'spacing' property of the ListView to a negative value. This overlaps the drawn items to achieve the desired look:
I have a PNG file with different sprites, in OpenGL ES 1. I have selected the picture with:
// the png dimensions are 512x512
gl.glMatrixMode(GL10.GL_TEXTURE);
// x and y are the coordinates of the selected drawing
gl.glTranslatef(x/512f, y/512f, 0);
// w and h are the width and height of the selected drawing
gl.glScalef(w/512f, h/512f, 0);
I have no idea how to do this in OpenGL ES 2. I read this tutorial:
http://www.learnopengles.com/android-lesson-four-introducing-basic-texturing/
It is not difficult, but you can only change the values of w and h (the equivalent of gl.glScalef(w/512f, h/512f, 0)).
Is there any other tutorial or solution?
So a tutorial you've read is what you need. Read the previous tutorials from that website. The main difference of GLES2 from GLES1 is that all drawing happens inside shaders (vertex and fragment). Here is the texture-binding part from my code, and a fragment shader source.
GLuint textureId;
// Generate a texture object
glGenTextures ( 1, &textureId );
// Bind the texture object
glBindTexture ( GL_TEXTURE_2D, textureId );
// Load the texture
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, where_you_store_unpacked_texture_data );
// Set the filtering mode
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
// Bind the texture
glActiveTexture ( GL_TEXTURE0 );
glBindTexture ( GL_TEXTURE_2D, textureId );
Then after you have bound the texture, you can pass the texture unit it is bound to into the fragment shader's sampler uniform.
The fragment shader is something like this:
const char* pszFragShader_text = "\
precision mediump float;\
\
varying vec3 v_texCoord_text;\
uniform sampler2D s_texture_text;\
void main (void)\
{\
gl_FragColor = texture2D( s_texture_text, v_texCoord_text.xy );\
}";
I'm trying to draw a sphere and calculate its surface normals. I've been staring at this for hours, but I'm getting nowhere. Here is a screenshot of the mess that this draws:
- (id) init
{
if (self = [super init]) {
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
GLfloat rad_th, rad_ph;
GLint th, ph;
GLint i = 0;
GLKMatrix3 this_triangle;
GLKVector3 column0, column1, column2, this_normal;
for (ph=-90; ph<=90; ph++) {
for (th=0; th<=360; th+=10) {
if (i<3) printf("i: %d th: %f ph: %f\n", i, (float)th, (float)ph);
rad_th = GLKMathDegreesToRadians( (float) th );
rad_ph = GLKMathDegreesToRadians( (float) ph);
_vertices[i][0][0] = sinf(rad_th)*cosf(rad_ph);
_vertices[i][0][1] = sinf(rad_ph);
_vertices[i][0][2] = cos(rad_th)*cos(rad_ph);
rad_th = GLKMathDegreesToRadians( (float) (th) );
rad_ph = GLKMathDegreesToRadians( (float) (ph+1) );
_vertices[i+1][0][0] = sinf(rad_th)*cosf(rad_ph);
_vertices[i+1][0][1] = sinf(rad_ph);
_vertices[i+1][0][2] = cos(rad_th)*cos(rad_ph);
i+=2;
}
}
// calculate and store the surface normal for every triangle
i=2;
for (ph=-90; ph<=90; ph++) {
for (th=2; th<=360; th++) {
// note that the first two vertices are irrelevant since it isn't until the third vertex that a triangle is defined.
column0 = GLKVector3Make(_vertices[i-2][0][0], _vertices[i-2][0][1], _vertices[i-2][0][2]);
column1 = GLKVector3Make(_vertices[i-1][0][0], _vertices[i-1][0][1], _vertices[i-1][0][2]);
column2 = GLKVector3Make(_vertices[i-0][0][0], _vertices[i-0][0][1], _vertices[i-0][0][2]);
this_triangle = GLKMatrix3MakeWithColumns(column0, column1, column2);
this_normal = [self calculateTriangleSurfaceNormal : this_triangle];
_vertices[i][1][0] = this_normal.x;
_vertices[i][1][1] = this_normal.y;
_vertices[i][1][2] = this_normal.z;
i++;
}
i+=2;
}
glBufferData(GL_ARRAY_BUFFER, sizeof(_vertices), _vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat)*6, NULL);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat)*6, (GLubyte*)(sizeof(GLfloat)*3));
glBindVertexArrayOES(0);
}
return self;
}
- (void) render
{
glDrawArrays(GL_TRIANGLE_STRIP, 0, 65522);
}
Here is my surface normal calculation. I've used this elsewhere, so I believe that it works, if given the correct vertices, of course.
- (GLKVector3) calculateTriangleSurfaceNormal : (GLKMatrix3) triangle_vertices
{
GLKVector3 surfaceNormal;
GLKVector3 col0 = GLKMatrix3GetColumn(triangle_vertices, 0);
GLKVector3 col1 = GLKMatrix3GetColumn(triangle_vertices, 1);
GLKVector3 col2 = GLKMatrix3GetColumn(triangle_vertices, 2);
GLKVector3 vec1 = GLKVector3Subtract(col1, col0);
GLKVector3 vec2 = GLKVector3Subtract(col2, col0);
surfaceNormal.x = vec1.y * vec2.z - vec2.y * vec1.z;
surfaceNormal.y = vec1.z * vec2.x - vec2.z * vec1.x;
surfaceNormal.z = vec1.x * vec2.y - vec2.x * vec1.y;
return GLKVector3Normalize(surfaceNormal);
}
In my .h file, I define the _vertices array like this (laugh if you will...):
// 360 + 2 = 362 vertices per triangle strip
// 90 strips per hemisphere (one hemisphere has 91)
// 2 hemispheres
// 362 * 90.5 * 2 = 65522
GLfloat _vertices[65522][2][3]; //2 sets (vertex, normal) and 3 vertices in each set
It appears you are calculating normals for the triangles in your triangle strip, but assigning these normals to vertexes which are shared by multiple triangles. If you just used triangles instead of the triangle strip, and gave all three vertexes of each triangle that triangle's normal, each triangle would have an appropriate normal. That value would really only be correct for the center of the triangle, though. You would be better off using vertex normals, which as Christian mentioned are equal to the vertexes in this case. These could be interpolated across the triangles.
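As a minimal sketch of that last suggestion, reusing the asker's array layout (and assuming the sphere is a unit sphere centred at the origin): each vertex position is already the outward unit normal, so the normal can be filled in right where the vertex is generated, and the whole second normal-calculation loop can be dropped.

// Inside the vertex-generation loop, right after _vertices[i][0][*] is set:
// for a unit sphere centred at the origin, position == unit normal.
_vertices[i][1][0] = _vertices[i][0][0];
_vertices[i][1][1] = _vertices[i][0][1];
_vertices[i][1][2] = _vertices[i][0][2];
// (and the same for _vertices[i+1] before i += 2)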
I have this method that prepares the coordinates in the posCoords array. It works properly about 30% of the time; the other 70% of the time, the first few triangles in the grid are messed up.
The entire grid is drawn using GL_TRIANGLE_STRIP.
I'm pulling my hair out trying to figure out what's wrong. Any ideas?
if(!ES2) {
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
}
int cols = floor(SCREEN_WIDTH/blockSize);
int rows = floor(SCREEN_HEIGHT/blockSize);
int cells = cols*rows;
NSLog(@"Cells: %i", cells);
coordCount = /*Points per coordinate*/2 * /*Coordinates per cell*/ 2 * cells + /* additional coord per row */2*2*rows;
NSLog(@"Coord count: %i", coordCount);
if(texCoords) free(texCoords);
if(posCoords) free(posCoords);
if(dposCoords) free(dposCoords);
texCoords = malloc(sizeof(GLfloat)*coordCount);
posCoords = malloc(sizeof(GLfloat)*coordCount);
dposCoords = malloc(sizeof(GLfloat)*coordCount);
int index = 0;
float lowY, hiY = 0;
int x,y = 0;
BOOL drawLeftToRight = YES;
for(y=0;y<SCREEN_HEIGHT;y+=blockSize) {
lowY = y;
hiY = y + blockSize;
// Draw a single row
for(x=0;x<=SCREEN_WIDTH;x+=blockSize) {
CGFloat px,py,px2,py2 = 0;
// Top point of triangle
if(drawLeftToRight) {
px = x;
py = lowY;
// Bottom point of triangle
px2 = x;
py2 = hiY;
}
else {
px = SCREEN_WIDTH-x;
py = lowY;
// Bottom point of triangle
px2 = SCREEN_WIDTH-x;
py2 = hiY;
}
// Top point of triangle
posCoords[index] = px;
posCoords[index+1] = py;
// Bottom point of triangle
posCoords[index+2] = px2;
posCoords[index+3] = py2;
texCoords[index] = px/SCREEN_WIDTH;
texCoords[index+1] = py/SCREEN_HEIGHT;
texCoords[index+2] = px2/SCREEN_WIDTH;
texCoords[index+3] = py2/SCREEN_HEIGHT;
index+=4;
}
drawLeftToRight = !drawLeftToRight;
}
With a triangle strip, the last vertex you add replaces the oldest vertex used, so you're using bad vertices along the edge. It's easier to explain with your drawing.
Triangle 1 uses vertices 1, 2, 3 - valid triangle
Triangle 2 uses vertices 2, 3, 4 - valid triangle
Triangle 3 uses vertices 4, 5, 6 - valid triangle
Triangle 4 uses vertices 5, 6, 7 - straight line, nothing will be drawn
Triangle 5 uses vertices 6, 7, 8 - valid
etc.
If you want your strips to work, you'll need to pad your strips with degenerate triangles or break your strips up.
I tend to draw left to right and at the end of the row add a degenerate triangle, then left to right again.
e.g. [1, 2, 3, 4, 5, 6; 6, 10; 10, 11, 8, 9, 6, 7]
The middle part is called a degenerate triangle (e.g. triangles of zero area).
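A minimal sketch of that stitching for an indexed grid (my own illustration, not the asker's code; a glDrawArrays version like the asker's would duplicate the vertices themselves instead of indices):

#include <cstdint>
#include <vector>

// Build one triangle strip over a grid with (rows + 1) x (cols + 1) vertices
// laid out row-major; two extra indices between rows form degenerate triangles.
std::vector<uint16_t> buildGridStrip(int rows, int cols)
{
    std::vector<uint16_t> indices;
    for (int row = 0; row < rows; ++row) {
        if (row > 0)
            indices.push_back(indices.back());                 // repeat last index of previous row
        for (int col = 0; col <= cols; ++col) {
            const uint16_t top    = static_cast<uint16_t>( row      * (cols + 1) + col);
            const uint16_t bottom = static_cast<uint16_t>((row + 1) * (cols + 1) + col);
            if (row > 0 && col == 0)
                indices.push_back(top);                        // repeat first index of the new row
            indices.push_back(top);
            indices.push_back(bottom);
        }
    }
    // The repeated indices create zero-area (degenerate) triangles, which the
    // GPU skips, so nothing is drawn across the seam between rows.
    return indices;
}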
Also, if I had to take a guess at why you are seeing various kinds of corruption, I'd check to make sure that your vertices and indices are exactly what you expect them to be - normally you see that kind of corruption when you don't specify indices correctly.
Found the issue: the texture buffer was overflowing into the vertex buffer. It was random because some background tasks were shuffling memory around on a timer (sometimes).