I'm trying to draw scenes on top of each other. My way of doing this in OpenGL was to draw one scene, clear the depth-stencil, and then draw the next scene.
In the Vulkan world, translating my previous approach means starting a render pass, drawing the first scene, clearing the depth-stencil attachment (vkCmdClearAttachments), drawing the next scene, and then ending the render pass. But several questions come to mind:
Is there any better strategy for this? (e.g. starting a new render pass for each scene)
I found vkCmdClearAttachments, but I'm not sure whether it needs synchronization with the previous and following commands or not.
The "optimal" way to do this, whether in Vulkan or OpenGL, is to partition your depth buffer into two regions: one for each scene. You use the viewport transform for partitioning the depth buffer. One scene would get the [0.5, 1] range, while the next would get the [0, 0.5] range. glDepthRange is what you would use to set this in OpenGL, and the min/maxDepth fields of VkViewport handle the similar transform in Vulkan.
If for some reason you can't partition your depth buffer (perhaps because both scenes need the extra precision of small floating-point values up close), then which alternative is more optimal will almost certainly depend on the underlying hardware. On a tile-based renderer, I would imagine that vkCmdClearAttachments is not good news as far as performance is concerned. But for a traditional renderer, such a clear call is hardly a performance problem, and it will likely be cheaper than the cost of beginning a new render pass.
I'm trying to use WebGL to speed up computations in a simulation of a small quantum circuit, like what the Quantum Computing Playground does. The problem I'm running into is that readPixels takes ~10ms, but I want to call it several times per frame while animating in order to get information out of gpu-land and into javascript-land.
As an example, here's my exact use case. The following circuit animation was created by computing things about the state between each column of gates, in order to show the inline-with-the-wire probability-of-being-on graphing:
The way I'm computing those things now, I'd need to call readPixels eight times for the above circuit (once after each column of gates). This is waaaaay too slow at the moment, easily taking 50ms when I profile it (bleh).
What are some tricks for speeding up readPixels in this kind of use case?
Are there configuration options that significantly affect the speed of readPixels? (e.g. the pixel format, the size, not having a depth buffer)
Should I try to make the readPixel calls all happen at once, after all the render calls have been made (maybe allows some pipelining)?
Should I try to aggregate all the textures I'm reading into a single megatexture and sort things out after a single big read?
Should I be using a different method to get the information back out of the textures?
Should I be avoiding getting the information out at all, and doing all the layout and rendering gpu-side (urgh...)?
Should I try to make the readPixel calls all happen at once, after all the render calls have been made (maybe allows some pipelining)?
Yes, yes, yes. readPixels is fundamentally a blocking, pipeline-stalling operation, and it is always going to kill your performance wherever it happens, because it's sending a request for data to the GPU and then waiting for it to respond, which normal draw calls don't have to do.
Do readPixels as few times as you can (use a single combined buffer to read from). Do it as late as you can. Everything else hardly matters.
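For illustration, here is the combined-read idea sketched in plain GL terms (WebGL's readPixels takes the same arguments); the framebuffer layout, the sizes, and the commented-out per-column draw are placeholders for your own setup:

```cpp
#include <GLES2/gl2.h>
#include <cstddef>
#include <vector>

// Render each intermediate simulation state into its own horizontal strip of one
// FBO-backed texture, then read the whole texture back once at the end of the frame.
std::vector<unsigned char> ReadAllStatesOnce(GLuint resultsFbo, int numColumns,
                                             int stateWidth, int rowHeight)
{
    glBindFramebuffer(GL_FRAMEBUFFER, resultsFbo);
    for (int column = 0; column < numColumns; ++column) {
        // Aim each simulation pass at its own strip of the shared target.
        glViewport(0, column * rowHeight, stateWidth, rowHeight);
        // drawColumnPass(column);   // your existing per-column render
    }

    // One blocking readback instead of eight: the pipeline stalls once, not once per column.
    std::vector<unsigned char> pixels(
        static_cast<std::size_t>(stateWidth) * numColumns * rowHeight * 4);
    glReadPixels(0, 0, stateWidth, numColumns * rowHeight,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;
}
```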
Should I be avoiding getting the information out at all, and doing all the layout and rendering gpu-side (urgh...)?
This will get you immensely better performance.
If your graphics are all like you show above, you shouldn't need to do any “layout” at all (which is good, because it'd be very awkward to implement): everything but the text is some kind of color or boundary animation which could easily be done in a shader, and all the layout can be just a static vertex buffer (each vertex has attributes pointing at the simulation-state texel it should depend on).
The text will be more tedious merely because you need to load all the digits into a texture to use as a spritesheet and do the lookups into that, but that's a standard technique. (Oh, and divide/modulo to get the digits.)
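If it helps, the divide/modulo step might look something like this sketch (the one-row digit atlas layout and the 0-100 value range are assumptions made for illustration):

```cpp
#include <vector>

// Returns, for each digit of `percent`, the u-offset of its glyph in a texture
// atlas where the ten digits '0'-'9' each occupy 1/10 of the atlas width.
std::vector<float> DigitUOffsets(int percent)
{
    if (percent == 0) return { 0.0f };                // single '0' glyph
    std::vector<float> offsets;
    while (percent > 0) {
        int digit = percent % 10;                     // least-significant digit first
        offsets.insert(offsets.begin(), digit / 10.0f);
        percent /= 10;
    }
    return offsets;
}
```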
I don't know enough about your use case, but just guessing: why do you need to call readPixels at all?
First, you don't need to draw text or the static parts of your diagram in WebGL. Put another canvas or svg or img over the WebGL canvas and set the CSS so they overlap. Let the browser composite them. Then you don't have to do it.
Second, let's assume you have a texture that has your computed results in it. Can't you just then make some geometry that matches the places in your diagram that need to have colors, and use texture coords to look up the results from the correct places in the results texture? Then you don't need to call readPixels at all. That shader can use a ramp texture lookup or any other technique to convert the results to other colors to shade the animated parts of your diagram.
If you want to draw numbers based on the result, you can use a technique like this, where you make a shader that references the results texture to look up a result value and then indexes glyphs from another texture based on that.
Am I making any sense?
I'm creating a DirectX 11 game that renders complex meshes in 3D space. I'm using vertex/index buffers/shaders and this all works fine. However I now want to perform some basic 'overlay' rendering - more specifically, I want to render wireframe boxes in 3D space to show the bounds of a particular area. There would only ever be one or two boxes in view at any one time, and their vertices would change position each frame.
I've therefore been searching for simpler DX11 rendering methods but most articles I find still prepare a vertex/index buffer for very simple rendering. I know that hardware is well optimised for processing vertex streams, but is the overhead of building and filling a vertex buffer every frame just to process 8 vertices really the most efficient method?
My question is therefore, what is the most efficient method for performing this very simple rendering in DX11? Is there any more primitive method ("DrawLine", "DrawLineList(D3DXVECTOR3[])", ...) that would be a better solution? It could be less efficient per-vertex than the standard method of passing vertex buffers because it's only ever going to be used for a handful of vertices per frame.
You should create a single vertex/index buffer for each primitive shape (box, sphere, ...) and use a transformation matrix to place it correctly in the world.
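A minimal sketch of that approach (the device pointer, the constant-buffer plumbing, and the bounds parameters are placeholders standing in for your existing D3D11 setup):

```cpp
#include <d3d11.h>
#include <DirectXMath.h>
using namespace DirectX;

// 8 corners of a unit cube centred on the origin.
static const XMFLOAT3 kBoxVerts[8] = {
    {-0.5f,-0.5f,-0.5f}, { 0.5f,-0.5f,-0.5f}, { 0.5f, 0.5f,-0.5f}, {-0.5f, 0.5f,-0.5f},
    {-0.5f,-0.5f, 0.5f}, { 0.5f,-0.5f, 0.5f}, { 0.5f, 0.5f, 0.5f}, {-0.5f, 0.5f, 0.5f},
};
// 12 edges as a line list (index buffer created the same way, with D3D11_BIND_INDEX_BUFFER).
static const UINT16 kBoxIndices[24] = {
    0,1, 1,2, 2,3, 3,0,   // back face
    4,5, 5,6, 6,7, 7,4,   // front face
    0,4, 1,5, 2,6, 3,7,   // connecting edges
};

// Created once at startup; the geometry itself never changes.
ID3D11Buffer* CreateBoxVertexBuffer(ID3D11Device* device)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = sizeof(kBoxVerts);
    desc.Usage     = D3D11_USAGE_IMMUTABLE;
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    D3D11_SUBRESOURCE_DATA init = { kBoxVerts };
    ID3D11Buffer* vb = nullptr;
    device->CreateBuffer(&desc, &init, &vb);
    return vb;
}

// Per frame: position the unit cube via the world matrix instead of rebuilding vertex data.
XMMATRIX BoxWorldMatrix(XMFLOAT3 boundsMin, XMFLOAT3 boundsMax)
{
    XMFLOAT3 size  { boundsMax.x - boundsMin.x, boundsMax.y - boundsMin.y, boundsMax.z - boundsMin.z };
    XMFLOAT3 centre{ (boundsMin.x + boundsMax.x) * 0.5f,
                     (boundsMin.y + boundsMax.y) * 0.5f,
                     (boundsMin.z + boundsMax.z) * 0.5f };
    return XMMatrixScaling(size.x, size.y, size.z) *
           XMMatrixTranslation(centre.x, centre.y, centre.z);
}
// Draw with IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINELIST) and
// DrawIndexed(24, 0, 0), uploading the world matrix through your usual constant buffer.
```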
I have arbitrary SVG paths which I need to pack as efficiently as possible within a given rectangle (with as little wasted space as possible). After some research I found bin packing algorithms, which seem to deal with boxes and not curved random shapes (my SVG shapes are quite complex and include beziers etc.).
AFAIK, there is no deterministic algorithm for actually packing abstract shapes.
I wish to be proven wrong here, which would be ideal (having a deterministic mathematical method for packing them). In case I am right, however, and there is not, what would be the best approach to this problem?
The subject name is Shape Nesting, Nesting Problem or Nesting Process.
In Shape Nesting there is no single/uniform algorithm or mathematical method for nesting shapes and getting the least space waste possible.
The 1st method is the packing algorithm (it creates an imaginary bounding box for each shape and uses a rectangular 2D algorithm to pack the bounding boxes). This method is fast but the least efficient in regards to space waste.
The 2nd method is some kind of incremental rotation. The algorithm rotates the shape in incremental steps and checks if it fits in the space. This is better than the packing method in regards to space waste, but it is painstakingly slow.
What are some other well-known approaches for achieving a solution to this problem?
[Edit1] new answer
As mentioned before, bin packing is NP-complete (hard), so forget about an algebraic solution.
Known approaches are:
generate and test
Either you test every possibility of the problem and remember the best solution, or you incrementally add items (not all at once) one by one in the same way. This is basically what you are doing now; without a proper heuristic it is unusably slow, but it has the best space efficiency (the exhaustive variant is much better but much slower), roughly O(N!).
take advantage of sorting items by size
Something like this is much faster, almost O(N·log(N)) (depending on the sorting algorithm used). Space efficiency depends strongly on the range of item sizes and their count. For rectangular shapes this is the best approach (fastest, and usable even for N > 1000). For complex shapes it is not a good way by itself, but look at it anyway; maybe you get some ideas (a rough shelf-packing sketch follows after the links below).
use of a Neural network
This is an extremely vague approach without any guarantee of a solution, but it possibly has the best space-efficiency/runtime ratio.
I think there could also be some field approach out there
I saw a few used for generating graph layouts. All items create fields (both attractive and repulsive) so they move toward a semi-stable state.
At first all items are at random locations.
When the movement stops, remember the best solution and shake all items a little or randomize their positions again.
Cycle this a few times.
This approach is much faster than generate and test and can provide a solution very close to it, but it can hang in a local min/max or oscillate if the fields are not chosen well. For example, all items can have a constant attractive force toward each other and a repulsive force that gets stronger only when the items are very close. You have to prevent overlapping of items (either by stronger repulsion or by collision tests). You also have to create some rotation moment, for example with that repulsive force: it differs per vertex, so it creates a rotation moment (which can automatically align similar sides closer together). Also you can have a semi-stable state with big distances between items and, after finding the best solution, just turn off the repulsion fields so they stick together. Sometimes this gives better results, sometimes not... Here is a nice example of graph layout computation:
Logic to strategically place items in a container with minimum overlapping connections
Demo from the same QA
And here is a solver for placing sliders in 2D:
How to implement a constraint solver for 2-D geometry?
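To make the "sort items by size" bullet above concrete, here is a minimal sketch of next-fit decreasing-height shelf packing applied to the shapes' bounding boxes. Rect, Placement, and the strip width are assumptions made for illustration, not part of any particular library:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Rect      { double w, h; };   // bounding box of one shape
struct Placement { double x, y; };   // where its lower-left corner ends up

// Packs rects into a strip of width binWidth; returns positions in the original order.
// Rects wider than the bin would need special handling and are not covered here.
std::vector<Placement> PackShelves(const std::vector<Rect>& rects, double binWidth)
{
    // Indices sorted by decreasing height so each shelf stays as tight as possible.
    std::vector<std::size_t> order(rects.size());
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(),
              [&](std::size_t a, std::size_t b) { return rects[a].h > rects[b].h; });

    std::vector<Placement> out(rects.size());
    double shelfY = 0, shelfH = 0, cursorX = 0;
    for (std::size_t i : order) {
        if (cursorX + rects[i].w > binWidth) {     // current shelf is full: open a new one
            shelfY += shelfH;
            shelfH  = 0;
            cursorX = 0;
        }
        shelfH = std::max(shelfH, rects[i].h);     // tallest item sets the shelf height
        out[i] = { cursorX, shelfY };
        cursorX += rects[i].w;
    }
    return out;
}
```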
[Edit0] old answer before reformulating the question
I am not clear on what you want to achieve. Either you:
1. have an SVG picture and want to separate its parts into rectangular regions
as filled as possible
with the least empty space in them
with no shape change in the picture
2. or have an SVG picture and want to change its shapes according to some purpose
If case 2 applies, some additional info is needed.
[solution for 1]
Create a list of points for the whole SVG in global SVG space (all points transformed):
for a line you need to add 2 points
for a rectangle, 4 points
for a circle/ellipse/bezier/elliptic arc, 8 points
Find local centres of mass:
use the classical approach,
or speed things up by computing the average density of points per x and y axis separately, and afterwards just check all combinations of the found local density maxima to see whether they really are sub-cluster centres or not.
Each sub-cluster centre is the centre of one of your regions.
Now find the farthest points that are still part of the cluster (they are close enough to neighbouring points).
Create a rectangular area that covers all points from the sub-cluster.
You can also remove all used points from the list.
Repeat for all valid sub-clusters
until all points are used.
Another less precise but simpler approach is:
find the SVG size
create a planar map of the SVG with some precision, for example int map[256][256]
the size of the map can be constant or can keep the same aspect ratio as the SVG
clear the map with 0
for any point of the SVG, set the related map point to 1 (or increment it, or whatever)
now just segment the map and you will have found your objects
after segmentation you have the position and size of all objects
so finding the bounding boxes should be easy
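A rough sketch of that map-and-segment idea (it assumes the grid has already been filled with 0/1 by sampling points from the flattened paths; Box and the 4-connected flood fill are just one way to do the segmentation):

```cpp
#include <algorithm>
#include <queue>
#include <utility>
#include <vector>

struct Box { int minX, minY, maxX, maxY; };

// Labels connected blobs of occupied cells in-place and returns one bounding box per blob.
std::vector<Box> SegmentGrid(std::vector<std::vector<int>>& map)   // 0 = empty, 1 = occupied
{
    const int h = static_cast<int>(map.size());
    const int w = h ? static_cast<int>(map[0].size()) : 0;
    std::vector<Box> boxes;
    int nextLabel = 2;                                   // labels start above "occupied"

    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (map[y][x] != 1) continue;                // not an unlabelled occupied cell
            Box box{ x, y, x, y };
            std::queue<std::pair<int,int>> q;
            q.push({x, y});
            map[y][x] = nextLabel;
            while (!q.empty()) {                         // 4-connected flood fill
                auto [cx, cy] = q.front(); q.pop();
                box.minX = std::min(box.minX, cx); box.maxX = std::max(box.maxX, cx);
                box.minY = std::min(box.minY, cy); box.maxY = std::max(box.maxY, cy);
                const int dx[4] = {1,-1,0,0}, dy[4] = {0,0,1,-1};
                for (int i = 0; i < 4; ++i) {
                    int nx = cx + dx[i], ny = cy + dy[i];
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h && map[ny][nx] == 1) {
                        map[ny][nx] = nextLabel;
                        q.push({nx, ny});
                    }
                }
            }
            boxes.push_back(box);
            ++nextLabel;
        }
    return boxes;
}
```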
You can start with a variant of the rectangle bin-packing algorithm and add rotation. There is a method called the "Guillotine bin packer", and you can download a paper and a library on GitHub.
I've got some OpenGL drawing code that I'm trying to optimize. It's currently testing all drawing objects for visibility client-side before deciding whether or not to send rendering data to OpenGL. (This is easier than it sounds. It's drawing a 2D scene so clipping is trivial: just test against the current coordinates of the viewport rectangle.)
It occurs to me that the entire model could be greatly simplified by passing the entire scene to OpenGL and letting the GPU take care of the clipping. But sometimes the total can be very, very complex, involving up to 100,000 total sprites, most of which never get rendered because they're off-camera, and I'd prefer to not end up killing the framerate in the name of simplicity.
I'm using OpenGL 2.0, and I've got a pretty simple vertex shader and a much more complicated fragment shader. Is there any guarantee that says that if the vertex shader runs and determines coordinates that are completely off-camera for all vertices of a polygon, that a clipping test will be applied somewhere between there and the fragment shader and prevent the fragment shader from ever running for that polygon? And if so, is this automatic or is there something I need to do to enable it? I've looked around online for information on this but I haven't found anything conclusive...
Clipping happens after the vertex transform stage, before and after NDC space: clip planes are applied in clip space, while viewport clipping is done in NDC space. That is one step before rasterizing. Clipping means that a face which is only partially visible is "cut", either by inserting new vertices at the visibility border or by discarding fragments outside the viewport. What you mean is usually called culling. Faces completely outside the viewport are culled, at the same stage as clipping.
From a performance point of view, the best code is code never executed, and the best data is data never accessed. So in your case sending off a single drawing call that makes the GPU process a large batch of vertices clearly takes load off the CPU, but it consumes GPU processing power. Culling those vertices before sending the drawing command consumes CPU power, but takes load off the GPU. The goal is to find the right balance. If the number of vertices is low, a simple brute force approach (just render the whole thing) may easily outperform ever other scheme.
However, using a simple yet effective data management scheme can greatly improve performance on both ends. For example, a spatial subdivision structure like a Kd tree is easily built (you don't have to balance it). Sorting the vertices into the Kd tree, you can omit (cull) large portions of the tree if one branch near the root is completely outside the viewport. When preparing to draw a frame, you iterate through the visible parts of the tree, building the list of vertices to draw, then you pass this list to the rendering command. Kd trees can be traversed on average in O(n log n) time.
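A minimal sketch of that kind of subdivision for a 2D sprite scene, assuming hypothetical Sprite/AABB types and a small leaf size (not tied to any particular engine):

```cpp
#include <algorithm>
#include <memory>
#include <vector>

struct AABB {
    float minX, minY, maxX, maxY;
    bool overlaps(const AABB& o) const {
        return minX <= o.maxX && maxX >= o.minX &&
               minY <= o.maxY && maxY >= o.minY;
    }
};
struct Sprite { float x, y; AABB bounds; /* plus render data */ };

struct KdNode {
    AABB bounds;                               // bounds of everything below this node
    std::vector<const Sprite*> sprites;        // filled only at leaves
    std::unique_ptr<KdNode> lo, hi;
};

// Built once (or when the scene changes); split on alternating axes at the median.
std::unique_ptr<KdNode> Build(std::vector<const Sprite*> items, int depth = 0)
{
    if (items.empty()) return nullptr;
    auto node = std::make_unique<KdNode>();
    node->bounds = items.front()->bounds;
    for (const Sprite* s : items) {            // union of all contained bounds
        node->bounds.minX = std::min(node->bounds.minX, s->bounds.minX);
        node->bounds.minY = std::min(node->bounds.minY, s->bounds.minY);
        node->bounds.maxX = std::max(node->bounds.maxX, s->bounds.maxX);
        node->bounds.maxY = std::max(node->bounds.maxY, s->bounds.maxY);
    }
    if (items.size() <= 16) { node->sprites = std::move(items); return node; }

    bool onX = (depth % 2) == 0;
    std::nth_element(items.begin(), items.begin() + items.size() / 2, items.end(),
        [onX](const Sprite* a, const Sprite* b) { return onX ? a->x < b->x : a->y < b->y; });
    std::vector<const Sprite*> left(items.begin(), items.begin() + items.size() / 2);
    std::vector<const Sprite*> right(items.begin() + items.size() / 2, items.end());
    node->lo = Build(std::move(left),  depth + 1);
    node->hi = Build(std::move(right), depth + 1);
    return node;
}

// Per frame: gather only sprites whose subtree touches the viewport rectangle.
void CollectVisible(const KdNode* n, const AABB& view, std::vector<const Sprite*>& out)
{
    if (!n || !n->bounds.overlaps(view)) return;   // whole branch off-screen: culled
    for (const Sprite* s : n->sprites)
        if (s->bounds.overlaps(view)) out.push_back(s);
    CollectVisible(n->lo.get(), view, out);
    CollectVisible(n->hi.get(), view, out);
}
```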
It's important to understand the difference between clipping and culling. You appear to be talking about the latter.
Clipping means taking a triangle and literally cutting it into pieces to fit into the viewport. The OpenGL specification defines this process to happen post-vertex shader, for any triangle that is only partially in view.
Culling means throwing something away entirely. If a triangle is not entirely in view, it can therefore be culled. OpenGL does not say that culling has to happen. Remember: the OpenGL specification defines behavior, not performance.
That being said, hardware makers are not stupid. Obvious efforts like not rasterizing triangles that are outside of the viewport are easily implemented and improve performance. Pretty much any hardware that exists will do this.
Similarly, clipping is typically implemented (where possible) with rasterizer tricks, rather than by creating new triangles. Fragments that would be outside of the viewport simply aren't generated by the rasterizer. This is also legal according to OpenGL, because the spec defines apparent behavior. It doesn't really care if you actually cut the triangle into pieces, as long as it looks indistinguishable from if you did.
Your question is essentially one of, "How much work should I do to not render off-screen objects?" That really depends on what your scene is and how you're rendering it. You say you're rendering 100,000 sprites. Are you making 100,000 draw calls, or are these sprites part of larger structures that you render with larger granularity? Do you stream the vertex data to the GPU every frame, or is the vertex data static?
Clipping and culling happen before fragment processing. http://www.opengl.org/wiki/Rendering_Pipeline_Overview
However, you will still be passing 100000 * 4 vertices (assuming you're rendering the sprites with quads and not point sprites) to the card if you don't do culling yourself. Depending on the card's memory performance this can be an issue.
As the title says, I'm fleshing out a design for a 2D platformer engine. It's still in the design stage, but I'm worried that I'll be running into issues with the renderer, and I want to avoid them if they will be a concern.
I'm using SDL for my base library, and the game will be set up to use a single large array of Uint16 to hold the tiles. These index into a second array of "tile definitions" that are used by all parts of the engine, from collision handling to the graphics routine, which is my biggest concern.
The graphics engine is designed to run at a 640x480 resolution, with 32x32 tiles. There are 21x16 tiles drawn per layer per frame (to handle the extra tile that shows up when scrolling), and there are up to four layers that can be drawn. Layers are simply separate tile arrays, but the tile definition array is common to all four layers.
What I'm worried about is that I want to be able to take advantage of transparencies and animated tiles with this engine, and as I'm not too familiar with designs I'm worried that my current solution is going to be too inefficient to work well.
My target FPS is a flat 60 frames per second, and with all four layers being drawn, I'm looking at 21x16x4x60 = 80,640 separate 32x32px tiles needing to be drawn every second, plus however many odd-sized blits are needed for sprites, and this seems just a little excessive. So, is there a better way to approach rendering the tilemap setup I have? I'm looking towards possibilities of using hardware acceleration to draw the tilemaps, if it will help to improve performance much. I also want to hopefully be able to run this game well on slightly older computers as well.
If I'm looking for too much, then I don't think that reducing the engine's capabilities is out of the question.
I think the thing that will be an issue is the sheer number of draw calls, rather than the total "fill rate" of all the pixels you are drawing. Remember: that is over 80,000 calls per second that you must make. I think your biggest improvement will be to batch these together somehow.
One strategy to reduce the fill-rate of the tiles and layers would be to composite static areas together. For example, if you know an area doesn't need updating, it can be cached. A lot depends of if the layers are scrolled independently (parallax style).
Also, Have a look on Google for "dirty rectangles" and see if any schemes may fit your needs.
Personally, I would just try it and see. This probably won't affect your overall game design, and if you have good separation between logic and presentation, you can optimise the tile drawing til the cows come home.
Make sure to use alpha transparency only on tiles that actually use alpha, and skip drawing blank tiles. Make sure the tile surface color depth matches the screen color depth when possible (not really an option for tiles with an alpha channel), and store tiles in video memory, so sdl will use hardware acceleration when it can. Color key transparency will be faster than having a full alpha channel, for simple tiles where partial transparency or blending antialiased edges with the background aren't necessary.
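For illustration, a sketch of those suggestions in SDL 1.2 terms (the magenta colour key and the `loaded` surface are assumptions; SDL 2 has equivalent calls with slightly different signatures):

```cpp
#include <SDL.h>

// Done once at load time for each simple (non-translucent) tile.
SDL_Surface* PrepareKeyedTile(SDL_Surface* loaded)
{
    // Convert to the current display format so SDL_BlitSurface needs no per-blit conversion.
    SDL_Surface* tile = SDL_DisplayFormat(loaded);
    SDL_FreeSurface(loaded);

    // Colour-key transparency is cheaper than a full alpha channel; RLE acceleration
    // lets SDL skip runs of transparent pixels entirely during the blit.
    SDL_SetColorKey(tile, SDL_SRCCOLORKEY | SDL_RLEACCEL,
                    SDL_MapRGB(tile->format, 255, 0, 255));
    return tile;
}

// Tiles that genuinely need partial transparency would instead go through
// SDL_DisplayFormatAlpha() and keep their alpha channel.
```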
On a 500 MHz system you'll get about 6.8 CPU cycles per pixel per layer, or 27 per screen pixel, which (I believe) isn't going to be enough if you have full alpha channels on every tile of every layer, but should be fine if you take shortcuts like those mentioned where possible.
I agree with Kombuwa. If this is just a simple tile-based 2D game, you really ought to lower the standards a bit as this is not Crysis. 30FPS is very smooth (research Command & Conquer 3 which is limited to 30FPS). Even still, I had written a remote desktop viewer that ran at 14FPS (1900 x 1200) using GDI+ and it was still pretty smooth. I think that for your 2D game you'll probably be okay, especially using SDL.
Can you just buffer each complete layer into a surface the size of the view plus one extra tile on all four sides (if you have vertical scrolling too), and then, when scrolling, reuse that buffer to build the next one minus the first column, drawing only the newly exposed end column?
This would reduce a lot of needless redrawing.
Additionally, if you want 60 fps, you can look up ways to create frame-skip methods for slower systems, skipping every other or every third draw phase.
I think you will be pleasantly surprised by how many of these tiles you can draw a second. Modern graphics hardware can fill a 1600x1200 framebuffer numerous times per frame at 60 fps, so your 640x480 framebuffer will be no problem. Try it and see what you get.
You should definitely take advantage of hardware acceleration. This will give you 1000x performance for very little effort on your part.
If you do find you need to optimise, then the simplest way is to only redraw the areas of the screen that have changed since the last frame. Sounds like you would need to know about any animating tiles, and any tiles that have changed state each frame. Depending on the game, this can be anywhere from no benefit at all, to a massive saving - it really depends on how much of the screen changes each frame.
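One way to structure that "redraw only what changed" idea, with the blit routine and the source of the visible tile indices left as placeholders:

```cpp
#include <cstdint>
#include <functional>

constexpr int kCols = 21, kRows = 16, kTile = 32;   // view size and tile size from the question

// Blit callback stands in for whatever SDL blitting routine the engine uses.
using BlitTile = std::function<void(uint16_t tile, int px, int py)>;

// `visible` is the tile index wanted at each screen cell this frame;
// `lastDrawn` remembers what was actually drawn last frame.
void RedrawChangedTiles(const uint16_t (&visible)[kRows][kCols],
                        uint16_t (&lastDrawn)[kRows][kCols],
                        const BlitTile& blit)
{
    for (int y = 0; y < kRows; ++y)
        for (int x = 0; x < kCols; ++x)
            if (visible[y][x] != lastDrawn[y][x]) {   // only changed cells get blitted
                blit(visible[y][x], x * kTile, y * kTile);
                lastDrawn[y][x] = visible[y][x];
            }
}
```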
You might consider merging neighbouring tiles with the same texture into a larger polygon with texture tiling (sort of a build process).
What about decreasing the frame rate to 30 fps? I think it will be good enough for a 2D game.