I'm able to render a point in vulkan by specifying VK_PRIMITIVE_TOPOLOGY_POINT_LIST in the graphics pipeline. But the resulting point is a small square.
How can I draw a round point in Vulkan? Is there an equivalent to GL_SMOOTH_POINT?
As noted in the comments, there is no such functionality built into Vulkan.
But you can easily emulate this in your point list rendering fragment shader by discarding fragments outside of a given circular radius like this:
const float radius = 0.25;
if (length(gl_PointCoord - vec2(0.5)) > radius) {
discard;
}
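For context, gl_PointCoord runs from (0, 0) to (1, 1) across the point, so a radius of 0.5 yields the largest circle that fits inside the point square; the 0.25 above gives a smaller dot. A complete Vulkan-style fragment shader might look like this (a minimal sketch; the flat output color is an assumption, substitute whatever your pipeline outputs):

```glsl
#version 450

layout(location = 0) out vec4 outColor;

void main() {
    // Discard fragments outside the circle inscribed in the point square.
    const float radius = 0.5;
    if (length(gl_PointCoord - vec2(0.5)) > radius) {
        discard;
    }
    outColor = vec4(1.0); // replace with your point color
}
```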
Basically I'm using this ParallaxCamera class to create the effect in my game, but upon movement, the layers "wiggle". This is especially noticeable when the camera is moving slowly.
I have a fixed timestep and use interpolated smoothing, and I use scaled-up pixel art. The camera is centered on the player and updated after movement. Disabling the parallax effect makes camera movement smooth.
What I guess the issues might be:
- the layers move at different paces, which means they move at different times
- rounding to display coordinates makes the layers assume slightly different positions each frame when the camera moves
Thanks for any help
For low-resolution pixel art, this is the strategy I've used. I draw to a small FrameBuffer at 1:1 resolution and then draw that to the screen. That should take care of jittering.
If your Stage is also at the same resolution, then you have to use a bit of a hack to get input processed properly. The one I've used is a StretchViewport, but I manually calculate the world width and height so the world is not stretched, so I'm basically doing the same calculation that ExtendViewport does behind the scenes. You also have to round the width and height to integers. You should do this in resize and apply the width and height using viewport.setWorldWidth() and setWorldHeight(). So in this case it doesn't matter what world size you give to the constructor, since it will be changed in update().
When you call update on the viewport in resize, you need to do it within the context of the FrameBuffer you are drawing to. Otherwise it will mess up the screen's frame buffer dimensions.
public void resize(int width, int height) {
    // Preserve aspect ratio: derive the world width from the fixed world height.
    int worldWidth = Math.round((float)WORLD_HEIGHT / (float)height * (float)width);
    viewport.setWorldWidth(worldWidth);
    viewport.setWorldHeight(WORLD_HEIGHT);
    frameBuffer.begin();
    viewport.update(width, height, true); // the actual screen dimensions
    frameBuffer.end();
}
You can look up examples of using FrameBuffer in LibGDX. You should do all your game drawing in between frameBuffer.begin() and end(), and then draw the frameBuffer's color buffer to the screen like this:
stage.act();
frameBuffer.begin();
//Draw game
stage.draw();
frameBuffer.end();
spriteBatch.setProjectionMatrix(spriteBatch.getProjectionMatrix().idt());
spriteBatch.begin();
spriteBatch.draw(frameBuffer.getColorBufferTexture(), -1, 1, 2, -2);
spriteBatch.end();
In my case, I do a more complicated calculation of world width and world height such that they are a whole number factor of the actual screen dimensions. This prevents the big pixels from being different sizes on the screen, which might look bad. Alternatively, you can change the filtering of the FrameBuffer's texture to linear and use an upscaling shader when drawing it.
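As a sketch of that "whole number factor" idea (the class and method names and the 180-pixel minimum world height are illustrative assumptions, not from the actual game code):

```java
// Sketch: pick world dimensions so each world pixel maps to a whole number
// of screen pixels. MIN_WORLD_HEIGHT is an assumed design constant.
public class WorldSize {
    static final int MIN_WORLD_HEIGHT = 180;

    // Largest integer pixel scale that still fits MIN_WORLD_HEIGHT on screen.
    static int pixelScale(int screenHeight) {
        return Math.max(1, screenHeight / MIN_WORLD_HEIGHT);
    }

    // World dimensions such that worldSize * scale covers the screen (rounded).
    static int worldWidth(int screenWidth, int scale) {
        return Math.round((float) screenWidth / scale);
    }

    static int worldHeight(int screenHeight, int scale) {
        return Math.round((float) screenHeight / scale);
    }

    public static void main(String[] args) {
        int scale = pixelScale(1080);
        System.out.println(scale);                    // 6
        System.out.println(worldWidth(1920, scale));  // 320
        System.out.println(worldHeight(1080, scale)); // 180
    }
}
```

You would then apply these values with viewport.setWorldWidth() and setWorldHeight() in resize() as shown above.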
I have been tinkering with the Vulkan triangle demo, trying to get a gradient background instead of a solid color. In OpenGL this is easily done by rendering a quad and keeping the background pixels. When I do this in Vulkan, the quad covers up the triangle. Does anyone have an idea how to do this? I tried separate render passes, but I don't know how to keep the pixels from the first render pass (the background). Any help or sample code would be appreciated. Thanks,
Tony
There is no need for separate passes; you can render the gradient quad as-is before the hello triangle.
If you have a separate shader for it, you can bind it at that point. If the data is in a separate VBO, you bind that as needed.
There's no need for a separate render pass if you just want a full-screen quad to display a gradient. It's pretty much the same as you'd do in OpenGL: render a triangle or quad that covers the whole screen with the gradient, and then draw the triangle. If your quad is hiding the triangle even though the triangle is closer to the viewer (on the z-axis), you may have depth testing disabled.
OK, I got the depth buffer clearing working correctly. Here is the code snippet; record this command in between rendering the background and the triangle. Note that vkCmdClearDepthStencilImage must be recorded outside of a render pass instance.
VkClearDepthStencilValue val;
val.depth = 1.0f; // reset depth to the far plane
val.stencil = 0;

VkImageSubresourceRange range;
range.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT; // clear the depth aspect only
range.baseMipLevel = 0;
range.levelCount = 1;
range.baseArrayLayer = 0;
range.layerCount = 1;

vkCmdClearDepthStencilImage(m_drawCmdBuffers[i], m_depthStencil.image, VK_IMAGE_LAYOUT_GENERAL, &val, 1, &range);
I don't know how to draw two independent vector graphics and apply a transformation to only one of them.
My code:
doc.moveTo(0, 20)
.lineTo(200, 20)
.rotate(45)
.stroke();
doc.moveTo(0, 40)
.lineTo(200, 40)
.stroke();
All I want is to have the first drawing rotated and the second not. But both are rotated, and I cannot find how to transform (rotate, scale) only one of them. Can anybody help, please?
Try using the graphics stack save() and restore() methods:
doc.save()
doc.moveTo(0, 20).lineTo(200, 20).rotate(45).stroke()
doc.restore()
doc.moveTo(0, 40).lineTo(200, 40).stroke()
I think methods like rotate() apply to the document, not just the line (in this case), so you can save the graphics stack, make your changes, then restore the graphics stack to what it was before.
From : PDFKit - Transformations
The rotate transformation takes an angle and optionally, an object with an origin property. It rotates the document angle degrees around the passed origin or by default, the center of the page.
See: PDFKit - Saving and restoring the graphics stack
Here's the task:
We have a Mesh, drawn at position POS with rotation ROT.
We also have a camera whose position and rotation are relative to the Mesh; for example, the camera position is CPOS and the camera rotation is CROT.
How to calculate resulting angle for camera? I was assuming that it something like:
camera.rotation.x = mesh.rotation.x + viewport.rotation.x
camera.rotation.y = mesh.rotation.y + viewport.rotation.y
camera.rotation.z = mesh.rotation.z + viewport.rotation.z
That worked strangely and gave wrong results.
Then I decided to read about it in the docs and was completely disappointed.
There are several kinds of rotation structures (Euler, Quaternion), but what I want is something different.
Imagine you are on a spaceship moving through space. You are sitting at the starboard turret, looking at objects; they seem to pass by...
Then you want to turn your head. The angle of your head is known to you (in raw OpenGL, I'd just multiply the head rotation matrix by the ship's rotation matrix and get my projection matrix).
In other words, I want only the x and y axes for the camera rotations, combined in a matrix. Then I want to multiply it with the position-rotation matrix of an object, and this final matrix would be my projection matrix.
How could I do the same in THREE.js?
-----EDIT-----
Thank you for the answer.
Which coordinates should I give to the camera? Should they be local, mesh-relative coordinates, or something absolute?
I understand that these questions seem obvious, but there's no description of relative objects in the THREE.js docs (besides the API description), and the answer might be ambiguous.
Add the camera as a child of the mesh like so:
mesh.add( camera );
When the camera is a child of an object, the camera's position and orientation are specified relative to the parent object.
You can set the camera's orientation by setting either the camera's quaternion or Euler rotation -- your choice.
Please note that the renderer updates the object's matrix and matrixWorld for you. You do not need to do that manually.
three.js r.63
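To make the "relative to parent" rule concrete, here is a rough sketch of the math three.js applies when computing a child's world position from its parent's transform (plain JavaScript, no three.js required; the function names are illustrative, not part of the three.js API):

```javascript
// Sketch: a child's world position is the parent's position plus the child's
// local offset rotated by the parent's orientation (a unit quaternion {x,y,z,w}).
function rotateByQuaternion(q, v) {
  // Computes q * v * q^-1, expanded for a plain vector {x,y,z}.
  const { x, y, z, w } = q;
  const ix =  w * v.x + y * v.z - z * v.y;
  const iy =  w * v.y + z * v.x - x * v.z;
  const iz =  w * v.z + x * v.y - y * v.x;
  const iw = -x * v.x - y * v.y - z * v.z;
  return {
    x: ix * w + iw * -x + iy * -z - iz * -y,
    y: iy * w + iw * -y + iz * -x - ix * -z,
    z: iz * w + iw * -z + ix * -y - iy * -x,
  };
}

function childWorldPosition(parentPos, parentQuat, localPos) {
  const r = rotateByQuaternion(parentQuat, localPos);
  return { x: parentPos.x + r.x, y: parentPos.y + r.y, z: parentPos.z + r.z };
}
```

With mesh.add( camera ), three.js performs this kind of composition (plus rotation composition) for you whenever it updates matrixWorld, which is why you only ever set the camera's local, mesh-relative position and rotation.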
I noticed that regardless of the shape (aspect ratio) of a texture, it will always draw as a perfect square, scaling unequally, when using it as a point sprite. I assume this is because points are, after all, circular.
If you wish to use point sprites on rectangular textures, is this possible using the point sprite mechanism, or would I need to just build quads with textures instead?
Or perhaps there is something that can be added to a shader to recognize and work with a rectangular texture? Currently mine are quite simple:
Vertex shader:
TextureCoordOut = TextureCoordinate;
gl_PointSize = 15.0;
Fragment:
gl_FragColor = texture2D(Sampler, isSprite? gl_PointCoord: TextureCoordOut) * DestinationColor;
Points have only one size, which is applied equally to the width and height.
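If you want to stay on the point-sprite path, one workaround is to letterbox the texture inside the square sprite in the fragment shader. This is a sketch assuming a texture wider than it is tall; Aspect is a new uniform (texture width divided by height) that you would have to supply yourself:

```glsl
// Sketch: sample a non-square texture from a square point sprite.
// Fragments outside the letterboxed rectangle are discarded.
uniform sampler2D Sampler;
uniform float Aspect; // texture width / height, assumed > 1.0 here
varying vec4 DestinationColor;

void main() {
    // Center the coordinate, then shrink the usable band vertically so the
    // texture keeps its aspect ratio inside the square sprite.
    vec2 coord = gl_PointCoord - vec2(0.5);
    coord.y *= Aspect;
    if (abs(coord.y) > 0.5) discard; // outside the rectangle
    gl_FragColor = texture2D(Sampler, coord + vec2(0.5)) * DestinationColor;
}
```

Otherwise, building textured quads yourself is the usual answer, since quads can take any aspect ratio directly.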