How do I create a Gradient Background in Vulkan?

I have been tinkering with the Vulkan triangle demo and am trying to get a gradient background instead of a solid color. In OpenGL this is easily done by rendering a quad first and keeping its pixels as the background. When I do this in Vulkan, the quad covers up the triangle. Anyone have an idea how to do this? I tried separate render passes, but I don't know how to keep the pixels from the first render pass (the background). Any help or sample code would be appreciated. Thanks,
Tony

There is no need for separate passes; you can render the gradient quad as is before the hello triangle.
If you have a separate shader for it, then you can bind it at that point. If the data is in a separate VBO, then you bind that as needed.

There's no need for a separate render pass if you just want a full screen quad to display a gradient. Actually it's pretty much the same as you'd do in OpenGL. Render a triangle or quad that covers the whole screen with the gradient, and then draw the triangle. If your quad is hiding the triangle even though the triangle is closer to the viewer (on the z-axis), you may have depth testing disabled.
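To make that concrete, here is a minimal sketch of how the command buffer recording could look. The names (cmdBuffer, renderPassBeginInfo, backgroundPipeline, trianglePipeline, triangleVertexBuffer) are placeholders, not from the demo, and it assumes the background pipeline was created with depth test and depth writes disabled:

VkDeviceSize offsets[] = { 0 };

vkCmdBeginRenderPass(cmdBuffer, &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);

// Background first: a fullscreen triangle generated in the vertex shader, so no VBO is needed.
// Its pipeline has depth writes disabled, so it can never occlude anything drawn after it.
vkCmdBindPipeline(cmdBuffer, VK_PIPELINE_BIND_POINT_GRAPHICS, backgroundPipeline);
vkCmdDraw(cmdBuffer, 3, 1, 0, 0);

// Then the usual hello triangle on top.
vkCmdBindPipeline(cmdBuffer, VK_PIPELINE_BIND_POINT_GRAPHICS, trianglePipeline);
vkCmdBindVertexBuffers(cmdBuffer, 0, 1, &triangleVertexBuffer, offsets);
vkCmdDraw(cmdBuffer, 3, 1, 0, 0);

vkCmdEndRenderPass(cmdBuffer);

With the background pipeline's VkPipelineDepthStencilStateCreateInfo set to depthTestEnable = VK_FALSE and depthWriteEnable = VK_FALSE, the quad never touches the depth buffer, so the triangle always ends up on top.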

OK, I got the depth buffer clearing working correctly. Here is the code snippet. Record this command in between the render of the background and the triangle. Note that vkCmdClearDepthStencilImage must be recorded outside of a render pass instance, and the image has to be in VK_IMAGE_LAYOUT_GENERAL or VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL.
// Reset the depth buffer to the far plane so the triangle can pass the depth test again
VkClearDepthStencilValue val;
val.depth = 1.0f;
val.stencil = 0;

// One mip level, one layer, depth aspect only
VkImageSubresourceRange range;
range.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT;
range.baseMipLevel = 0;
range.levelCount = 1;
range.baseArrayLayer = 0;
range.layerCount = 1;

vkCmdClearDepthStencilImage(m_drawCmdBuffers[i], m_depthStencil.image, VK_IMAGE_LAYOUT_GENERAL, &val, 1, &range);

Related

Parallax effect jitter in libGDX

Basically I'm using this ParallaxCamera class to create the effect in my game, but upon movement, the layers "wiggle". This is especially noticeable when the camera is moving slowly.
I have a fixed timestep and use interpolated smoothing. I use scaled-up pixel art. The camera is centered on the player and updated after movement. Disabling the effect makes moving the camera smooth.
What I guess the issues might be:
- the layers move at different paces, which means they move at different times
- rounding to the display makes the layers assume slightly different positions each frame when moving the camera
Thanks for any help
For low-resolution pixel art, this is the strategy I've used. I draw to a small FrameBuffer at 1:1 resolution and then draw that to the screen. That should take care of jittering.
If your Stage is also at the same resolution, then you have to use a bit of a hack to get input processed properly. The one I've used is a StretchViewport, but I manually calculate the world width and height so the world isn't actually stretched, so I'm basically doing the same calculation that ExtendViewport does behind the scenes. You also have to round the width and height to an integer. You should do this in resize and apply the width and height using viewport.setWorldWidth() and setWorldHeight(). So in this case it doesn't matter what world size you give to the constructor, since it will be changed in update().
When you call update on the viewport in resize, you need to do it within the context of the FrameBuffer you are drawing to. Otherwise it will mess up the screen's frame buffer dimensions.
public void resize(int width, int height) {
    // Keep the world height fixed and derive the width from the window's aspect ratio
    int worldWidth = Math.round((float)WORLD_HEIGHT / (float)height * (float)width);
    viewport.setWorldWidth(worldWidth);
    viewport.setWorldHeight(WORLD_HEIGHT);
    // Update the viewport while the FrameBuffer is bound so the screen's
    // frame buffer dimensions aren't clobbered
    frameBuffer.begin();
    viewport.update(width, height, true); // the actual screen dimensions
    frameBuffer.end();
}
You can look up examples of using FrameBuffer in LibGDX. You should do all your game drawing in between frameBuffer.begin() and end(), and then draw the frameBuffer's color buffer to the screen like this:
stage.act();
frameBuffer.begin();
// Draw the whole game at 1:1 pixel resolution into the FrameBuffer
stage.draw();
frameBuffer.end();
// Then stretch the FrameBuffer's color texture over the whole screen
spriteBatch.setProjectionMatrix(spriteBatch.getProjectionMatrix().idt());
spriteBatch.begin();
// The negative height flips the texture, since FrameBuffer textures come out upside down
spriteBatch.draw(frameBuffer.getColorBufferTexture(), -1, 1, 2, -2);
spriteBatch.end();
In my case, I do a more complicated calculation of world width and world height such that they are a whole number factor of the actual screen dimensions. This prevents the big pixels from being different sizes on the screen, which might look bad. Alternatively, you can change the filtering of the FrameBuffer's texture to linear and use an upscaling shader when drawing it.
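As a rough sketch of that whole-number-factor idea (MIN_WORLD_HEIGHT and the rest of these names are made up for illustration, not from the code above), the calculation inside resize(int width, int height) could look something like this:

// Choose the largest integer scale that still shows at least MIN_WORLD_HEIGHT
// world pixels vertically, then derive the world dimensions from it.
int scale = Math.max(1, height / MIN_WORLD_HEIGHT);
int worldWidth = width / scale;   // each world pixel now covers exactly 'scale' screen pixels
int worldHeight = height / scale; // any leftover screen pixels at the edges are glossed over here
viewport.setWorldWidth(worldWidth);
viewport.setWorldHeight(worldHeight);

Because the scale is an integer, every big pixel ends up the same size on screen, which is what prevents the uneven-pixel look.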

Metal multisampling results in darkened textures

So I'm trying to implement full-screen MSAA in my Metal app. I have it working, and when drawing solid-filled polygons the edges appear smooth as expected. However, my textured polygons appear dark, and they get darker as I increase the number of samples, which suggests the shader might be taking only one sample of the texture per fragment and blending it with n - 1 samples of black, therefore making it darker.
However, in my app I also have textures that I render to and then draw to the screen. These textures show up perfectly fine. I can't really see a difference between the two kinds of textures that would change the behavior of multisampling.
Anyway, if anyone could maybe give me any clues as to what's going on, I would greatly appreciate it. I'm pretty stumped on this one.
I figured it out. The problem was that I hadn't set my stencil draw pipeline state to be multisampled. Therefore it was only reading the value in the stencil buffer for 1 out of n samples, and hence darkening the output. It works fine now.
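For anyone hitting the same thing, a minimal sketch of the fix in Swift might look like the following. The descriptor setup and the function names are illustrative assumptions, not the poster's actual code; the important line is matching the pipeline's sampleCount to the multisampled render target:

// Hypothetical stencil-write pipeline; the key part is sampleCount.
let descriptor = MTLRenderPipelineDescriptor()
descriptor.vertexFunction = library.makeFunction(name: "stencilVertex")
descriptor.fragmentFunction = library.makeFunction(name: "stencilFragment")
descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
descriptor.depthAttachmentPixelFormat = .depth32Float_stencil8
descriptor.stencilAttachmentPixelFormat = .depth32Float_stencil8
descriptor.sampleCount = 4   // must match the sample count of the MSAA attachments
let stencilPipelineState = try device.makeRenderPipelineState(descriptor: descriptor)

Every pipeline state that renders into the multisampled attachments has to be built with the same sample count, including ones that only touch the stencil buffer.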

Are point sprites always perfect circles/squares?

I noticed that regardless of the shape (aspect ratio) of a texture, it will always draw as a perfect square, scaling unequally, when using it as a point sprite. I assume this is because points are, after all, circular.
If you wish to use point sprites on rectangular textures, is this possible using the point sprite mechanism, or would I need to just build quads with textures instead?
Or perhaps there is something that can be added to a shader to recognize and work with a rectangular texture? Currently mine are quite simple:
Vertex shader:
TextureCoordOut = TextureCoordinate;
gl_PointSize = 15.0;
Fragment:
gl_FragColor = texture2D(Sampler, isSprite? gl_PointCoord: TextureCoordOut) * DestinationColor;
Points have only one size (gl_PointSize), which is applied equally to the width and height, so a point sprite is always rasterized as a square.
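If you still want to keep point sprites for rectangular textures, one workaround (a sketch, not part of the answer above; the TexAspect uniform is something you would have to add yourself) is to remap gl_PointCoord in the fragment shader and discard the pixels outside the rectangle:

precision mediump float;

uniform sampler2D Sampler;
uniform vec4 DestinationColor;   // assumed to be a uniform here; in your shader it may be a varying
uniform float TexAspect;         // texture width / height, e.g. 2.0 for a texture twice as wide as tall

void main()
{
    vec2 coord = gl_PointCoord;
    // Letterbox a wide texture inside the square point: squash the sampled band vertically.
    // For a tall texture (TexAspect < 1.0) you would adjust coord.x instead.
    coord.y = (coord.y - 0.5) * TexAspect + 0.5;
    if (coord.y < 0.0 || coord.y > 1.0)
        discard;                 // pixels outside the rectangle stay empty
    gl_FragColor = texture2D(Sampler, coord) * DestinationColor;
}

The sprite itself is still square, so this wastes some fill rate on discarded pixels; for lots of large rectangular sprites, textured quads are usually the better option.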

How to transition an isometric object from one tile to another without overlapping other tiles

I've created an isometric environment, all in Javascript and HTML5 (2D Canvas), which mostly works fine. The problem I'm facing is to do with having different height tiles and then sorting the indexes of objects on the tiles (in this case, while moving between two tiles side-by-side).
For example, one object may be behind a tile in front of it because the height of the tile it is on is -1. The solution I came up with was to draw each object of a tile directly after drawing the tile, starting at 0,0 and drawing each row and column from there.
This works well until I need to transition an object between two tiles. At that point, either the object must use an intermediate tile (this is what is implemented in the images below) or the tile will overlap the object, because the tile is drawn after the object. Using an intermediate tile also gives a problem where the fence object on the same "row" gets drawn over, because the cube is using a much higher z-index from the tile at 1,3 (this is slightly visible in image 1).
http://i.stack.imgur.com/PQJ0H.png
http://i.stack.imgur.com/DupM7.png
I think the tried and tested way of drawing isometric environments is just to have 1 layer for tiles and 1 layer for objects and then objects can never be behind tiles, but this is just a limitation that I don't want to adhere to.
So my question is: when drawing the entire environment from top to bottom (or any other way if it makes this possible), drawing each tile and its objects in turn, is there a clever way to defer drawing of an object, or to build an array of objects to be drawn in the correct order? Has anyone else encountered similar issues and found any solutions for this?
All help much appreciated.
An example of my tiling code:
// each column
for(var y=0; y<totalColumns; y++){
    // each row
    for(var x=0; x<totalRows; x++){
        tile = tiles[y][x];
        // draw tile
        drawTile(tile);
        // draw objects for that tile (use a separate variable so the
        // objects array itself isn't overwritten)
        tileObjects = objects[y][x];
        drawObjects(tileObjects);
    }
}
Edit:
One solution I have thought of (after reading the question back to myself) is to loop through all the tiles, get an array of tile heights, sort that, then do the traditional drawing. Like so:
var layers = [];
for(var y=0; y<cols; y++){
    for(var x=0; x<rows; x++){
        tile = tiles[y][x];
        // only record each distinct height once
        if(layers.indexOf(tile.height) === -1) layers.push(tile.height);
    }
}
// sort layers from lowest to highest
layers.sort(function(a, b){ return a - b; });
for(var l=0; l<layers.length; l++){
    // draw tiles for this layer
    // draw objects for this layer
}
Any other solutions possible?
You're doing exactly the right thing in your solution. You want to do this for everything that doesn't have alpha.
As far as other solutions go, I would recommend looking at using an insertion sort rather than rebuilding the list time and time again. In your case very little changes from frame to frame, so just update the entries that need it.
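As a rough sketch of that idea (the drawables list and the sortKey property are assumptions for illustration, not from the code in the question), an insertion sort over an almost-sorted list is close to O(n) per frame:

// Re-sort the flat list of drawables in place each frame. Because almost
// nothing moves between frames, the inner while loop rarely runs.
function insertionSortByKey(items) {
    for (var i = 1; i < items.length; i++) {
        var item = items[i];
        var j = i - 1;
        while (j >= 0 && items[j].sortKey > item.sortKey) {
            items[j + 1] = items[j];
            j--;
        }
        items[j + 1] = item;
    }
}

// sortKey could combine row, column and tile height into a single draw-order value
insertionSortByKey(drawables);
for (var i = 0; i < drawables.length; i++) {
    drawables[i].draw();
}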

If vertex array count > 1000, glDrawArrays becomes slow?

I have a painting app. Mouse event coordinates are stored in a vertex array, and then the vertex array is drawn to the screen. My code structure looks like this:
// I get mouse event coordinates and store them to VertexArray
glPushMatrix();
//some new matrix settings
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
glClear(GL_COLOR_BUFFER_BIT);
//now I draw the first full-size textured quad, and then I draw the vertex array
glDrawArrays(.....);
//and now I draw a second full-size textured quad on top of the first quad and whatever was drawn from the vertex array
glPopMatrix();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
//immediately after that I draw FBO to screen:
glBindTexture(GL_TEXTURE_2D, fbTexture);
//Code for drawing textured quad
glBindTexture(GL_TEXTURE_2D, 0);
So everything is redrawn every time a new mouse event coordinate is registered. And if there are more than 1000 coordinates, drawing becomes really slow. Where could my problem be? I think 1000 vertices is not much for OpenGL.
It's not the number of vertices; it's how you're sending them.
First, you never defined "really slow"; oftentimes people will mistakenly think that a change from 400fps to 300fps is "slow". It's not. It only represents a render time increase from 2.5ms per frame to 3.3ms, a change of less than a single millisecond. Non-trivial, but probably not something to be too concerned over.
It's always important to measure performance in terms of render time, not FPS.
That being said, your main problem is that you're drawing a single quad at a time, each one coming from a separate glDrawArrays command. That's not necessarily a good thing, especially if you change state between drawing commands (like binding a texture and so forth).
If you're doing that, then you need to find ways to avoid doing it. What you want to do is render a lot of quads with one draw call. This means you have to use the same texture for all of them.
The common solution to this problem is to make a larger texture that has multiple images in different locations. This is commonly called a "texture atlas" (Google that for the details). Each quad would have texture coordinates for the particular image it renders. Text is often drawn in such a way, where each letter (glyph) is stored in the same texture.
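Here is a minimal sketch of that idea in old-style (fixed-function) OpenGL to match the code in the question; all of the names (Vertex, addQuad, atlasTexture, MAX_QUADS) are made up for illustration:

#include <GL/gl.h>    /* or whatever GL header/loader your platform uses */
#include <string.h>   /* memcpy */

#define MAX_QUADS 4096

typedef struct { float x, y, u, v; } Vertex;

static Vertex vertices[MAX_QUADS * 6];
static int vertexCount = 0;
static GLuint atlasTexture;   /* one texture holding every sub-image (the "atlas") */

/* Append one textured quad (two triangles) sampling the atlas region (u0,v0)-(u1,v1). */
void addQuad(float x, float y, float size, float u0, float v0, float u1, float v1)
{
    Vertex quad[6] = {
        { x,        y,        u0, v0 },
        { x + size, y,        u1, v0 },
        { x + size, y + size, u1, v1 },
        { x,        y,        u0, v0 },
        { x + size, y + size, u1, v1 },
        { x,        y + size, u0, v1 },
    };
    if (vertexCount + 6 > MAX_QUADS * 6)
        return;   /* buffer full; a real app would flush or grow here */
    memcpy(&vertices[vertexCount], quad, sizeof(quad));
    vertexCount += 6;
}

/* One texture bind and one draw call for every quad accumulated this frame. */
void drawAll(void)
{
    glBindTexture(GL_TEXTURE_2D, atlasTexture);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].x);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].u);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}

The texture coordinates of each quad select its sub-image inside the atlas, so the whole stroke history can be drawn with a single glBindTexture and a single glDrawArrays per frame.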